00:00:00.000 Started by upstream project "autotest-per-patch" build number 132091 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.183 Fetching changes from the remote Git repository 00:00:00.184 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.250 Using shallow fetch with depth 1 00:00:00.250 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.250 > git --version # timeout=10 00:00:00.309 > git --version # 'git version 2.39.2' 00:00:00.309 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.353 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.353 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.392 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.405 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.419 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.419 > git config core.sparsecheckout # timeout=10 00:00:05.432 > git read-tree -mu HEAD # timeout=10 00:00:05.449 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.467 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.467 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.556 [Pipeline] Start of Pipeline 00:00:05.566 [Pipeline] library 00:00:05.567 Loading library shm_lib@master 00:00:05.567 Library shm_lib@master is cached. Copying from home. 00:00:05.581 [Pipeline] node 00:00:05.591 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.593 [Pipeline] { 00:00:05.600 [Pipeline] catchError 00:00:05.601 [Pipeline] { 00:00:05.610 [Pipeline] wrap 00:00:05.616 [Pipeline] { 00:00:05.623 [Pipeline] stage 00:00:05.625 [Pipeline] { (Prologue) 00:00:05.831 [Pipeline] sh 00:00:06.116 + logger -p user.info -t JENKINS-CI 00:00:06.132 [Pipeline] echo 00:00:06.134 Node: CYP12 00:00:06.141 [Pipeline] sh 00:00:06.443 [Pipeline] setCustomBuildProperty 00:00:06.454 [Pipeline] echo 00:00:06.455 Cleanup processes 00:00:06.461 [Pipeline] sh 00:00:06.748 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.748 3547738 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.763 [Pipeline] sh 00:00:07.053 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.053 ++ grep -v 'sudo pgrep' 00:00:07.053 ++ awk '{print $1}' 00:00:07.053 + sudo kill -9 00:00:07.053 + true 00:00:07.067 [Pipeline] cleanWs 00:00:07.078 [WS-CLEANUP] Deleting project workspace... 00:00:07.078 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.086 [WS-CLEANUP] done 00:00:07.090 [Pipeline] setCustomBuildProperty 00:00:07.102 [Pipeline] sh 00:00:07.383 + sudo git config --global --replace-all safe.directory '*' 00:00:07.460 [Pipeline] httpRequest 00:00:07.916 [Pipeline] echo 00:00:07.917 Sorcerer 10.211.164.101 is alive 00:00:07.925 [Pipeline] retry 00:00:07.926 [Pipeline] { 00:00:07.936 [Pipeline] httpRequest 00:00:07.940 HttpMethod: GET 00:00:07.941 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.941 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.957 Response Code: HTTP/1.1 200 OK 00:00:07.957 Success: Status code 200 is in the accepted range: 200,404 00:00:07.958 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:19.357 [Pipeline] } 00:00:19.373 [Pipeline] // retry 00:00:19.380 [Pipeline] sh 00:00:19.668 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:19.685 [Pipeline] httpRequest 00:00:20.422 [Pipeline] echo 00:00:20.423 Sorcerer 10.211.164.101 is alive 00:00:20.432 [Pipeline] retry 00:00:20.434 [Pipeline] { 00:00:20.444 [Pipeline] httpRequest 00:00:20.447 HttpMethod: GET 00:00:20.448 URL: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:20.448 Sending request to url: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:20.457 Response Code: HTTP/1.1 200 OK 00:00:20.457 Success: Status code 200 is in the accepted range: 200,404 00:00:20.458 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:04:33.399 [Pipeline] } 00:04:33.420 [Pipeline] // retry 00:04:33.428 [Pipeline] sh 00:04:33.718 + tar --no-same-owner -xf spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:04:37.033 [Pipeline] sh 00:04:37.324 + git -C spdk log --oneline -n5 00:04:37.324 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:04:37.324 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:04:37.324 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:04:37.324 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:04:37.324 cc533a3e5 nvme/nvme: Factor out submit_request function 00:04:37.337 [Pipeline] } 00:04:37.353 [Pipeline] // stage 00:04:37.365 [Pipeline] stage 00:04:37.368 [Pipeline] { (Prepare) 00:04:37.394 [Pipeline] writeFile 00:04:37.412 [Pipeline] sh 00:04:37.703 + logger -p user.info -t JENKINS-CI 00:04:37.717 [Pipeline] sh 00:04:38.005 + logger -p user.info -t JENKINS-CI 00:04:38.019 [Pipeline] sh 00:04:38.309 + cat autorun-spdk.conf 00:04:38.309 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:38.309 SPDK_TEST_NVMF=1 00:04:38.309 SPDK_TEST_NVME_CLI=1 00:04:38.309 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:38.309 SPDK_TEST_NVMF_NICS=e810 00:04:38.309 SPDK_TEST_VFIOUSER=1 00:04:38.309 SPDK_RUN_UBSAN=1 00:04:38.309 NET_TYPE=phy 00:04:38.317 RUN_NIGHTLY=0 00:04:38.323 [Pipeline] readFile 00:04:38.352 [Pipeline] withEnv 00:04:38.354 [Pipeline] { 00:04:38.366 [Pipeline] sh 00:04:38.653 + set -ex 00:04:38.653 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:38.653 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:38.653 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:38.653 ++ SPDK_TEST_NVMF=1 00:04:38.653 ++ SPDK_TEST_NVME_CLI=1 00:04:38.653 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:38.653 ++ 
SPDK_TEST_NVMF_NICS=e810 00:04:38.653 ++ SPDK_TEST_VFIOUSER=1 00:04:38.653 ++ SPDK_RUN_UBSAN=1 00:04:38.653 ++ NET_TYPE=phy 00:04:38.653 ++ RUN_NIGHTLY=0 00:04:38.653 + case $SPDK_TEST_NVMF_NICS in 00:04:38.653 + DRIVERS=ice 00:04:38.653 + [[ tcp == \r\d\m\a ]] 00:04:38.653 + [[ -n ice ]] 00:04:38.653 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:38.653 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:38.653 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:04:38.653 rmmod: ERROR: Module irdma is not currently loaded 00:04:38.653 rmmod: ERROR: Module i40iw is not currently loaded 00:04:38.653 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:38.653 + true 00:04:38.653 + for D in $DRIVERS 00:04:38.653 + sudo modprobe ice 00:04:38.653 + exit 0 00:04:38.662 [Pipeline] } 00:04:38.678 [Pipeline] // withEnv 00:04:38.683 [Pipeline] } 00:04:38.697 [Pipeline] // stage 00:04:38.708 [Pipeline] catchError 00:04:38.710 [Pipeline] { 00:04:38.723 [Pipeline] timeout 00:04:38.724 Timeout set to expire in 1 hr 0 min 00:04:38.725 [Pipeline] { 00:04:38.739 [Pipeline] stage 00:04:38.741 [Pipeline] { (Tests) 00:04:38.757 [Pipeline] sh 00:04:39.048 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:39.048 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:39.048 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:39.048 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:39.048 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.048 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:39.048 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:39.048 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:39.048 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:39.048 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:39.048 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:39.048 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:39.048 + source /etc/os-release 00:04:39.048 ++ NAME='Fedora Linux' 00:04:39.048 ++ VERSION='39 (Cloud Edition)' 00:04:39.048 ++ ID=fedora 00:04:39.048 ++ VERSION_ID=39 00:04:39.048 ++ VERSION_CODENAME= 00:04:39.048 ++ PLATFORM_ID=platform:f39 00:04:39.048 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:39.048 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:39.048 ++ LOGO=fedora-logo-icon 00:04:39.048 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:39.048 ++ HOME_URL=https://fedoraproject.org/ 00:04:39.048 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:39.048 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:39.048 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:39.048 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:39.048 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:39.048 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:39.048 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:39.048 ++ SUPPORT_END=2024-11-12 00:04:39.048 ++ VARIANT='Cloud Edition' 00:04:39.048 ++ VARIANT_ID=cloud 00:04:39.048 + uname -a 00:04:39.048 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:39.048 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.352 Hugepages 00:04:42.352 node hugesize free / total 00:04:42.352 node0 1048576kB 0 / 0 00:04:42.352 node0 2048kB 0 / 0 00:04:42.352 node1 1048576kB 0 / 0 00:04:42.352 node1 2048kB 0 / 0 00:04:42.352 
00:04:42.352 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.352 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:42.352 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:42.352 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:42.352 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:42.352 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:42.352 + rm -f /tmp/spdk-ld-path 00:04:42.352 + source autorun-spdk.conf 00:04:42.352 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:42.352 ++ SPDK_TEST_NVMF=1 00:04:42.352 ++ SPDK_TEST_NVME_CLI=1 00:04:42.352 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:42.352 ++ SPDK_TEST_NVMF_NICS=e810 00:04:42.352 ++ SPDK_TEST_VFIOUSER=1 00:04:42.352 ++ SPDK_RUN_UBSAN=1 00:04:42.352 ++ NET_TYPE=phy 00:04:42.352 ++ RUN_NIGHTLY=0 00:04:42.352 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:42.352 + [[ -n '' ]] 00:04:42.352 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.352 + for M in /var/spdk/build-*-manifest.txt 00:04:42.352 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:42.352 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:42.352 + for M in /var/spdk/build-*-manifest.txt 00:04:42.352 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:42.352 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:42.352 + for M in /var/spdk/build-*-manifest.txt 00:04:42.352 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:42.352 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:42.352 ++ uname 00:04:42.352 + [[ Linux == \L\i\n\u\x ]] 00:04:42.352 + sudo dmesg -T 00:04:42.352 + sudo dmesg --clear 00:04:42.352 + dmesg_pid=3549419 00:04:42.352 + [[ Fedora Linux == FreeBSD ]] 00:04:42.352 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:42.352 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:42.352 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:42.352 + [[ -x /usr/src/fio-static/fio ]] 00:04:42.352 + export FIO_BIN=/usr/src/fio-static/fio 00:04:42.352 + FIO_BIN=/usr/src/fio-static/fio 00:04:42.352 + sudo dmesg -Tw 00:04:42.352 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:42.352 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:42.352 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:42.352 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:42.352 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:42.352 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:42.352 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:42.352 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:42.352 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.615 09:56:45 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:42.615 09:56:45 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:04:42.615 09:56:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:42.615 09:56:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:42.615 09:56:45 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.615 09:56:45 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:42.615 09:56:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.615 09:56:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:42.615 09:56:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:42.615 09:56:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.615 09:56:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.615 09:56:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.615 09:56:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.615 09:56:45 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.615 09:56:45 -- paths/export.sh@5 -- $ export PATH 00:04:42.615 09:56:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.615 09:56:45 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:42.615 09:56:45 -- common/autobuild_common.sh@486 -- $ date +%s 00:04:42.615 09:56:45 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730883405.XXXXXX 00:04:42.615 09:56:45 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730883405.ao3djY 00:04:42.615 09:56:45 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:04:42.615 09:56:45 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:04:42.615 09:56:45 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:42.615 09:56:45 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:42.615 09:56:45 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:42.615 09:56:45 -- common/autobuild_common.sh@502 -- $ get_config_params 00:04:42.615 09:56:45 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:42.615 09:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:04:42.615 09:56:46 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:42.615 09:56:46 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:04:42.615 09:56:46 -- pm/common@17 -- $ local monitor 00:04:42.615 09:56:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.615 09:56:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.615 09:56:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.615 09:56:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.615 09:56:46 -- pm/common@21 -- $ date +%s 00:04:42.615 09:56:46 -- pm/common@25 -- $ sleep 1 00:04:42.615 09:56:46 -- pm/common@21 -- $ date +%s 00:04:42.615 09:56:46 -- pm/common@21 -- $ date +%s 00:04:42.615 09:56:46 -- pm/common@21 -- $ date +%s 00:04:42.615 09:56:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730883406 00:04:42.615 09:56:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730883406 00:04:42.615 09:56:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730883406 00:04:42.615 09:56:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730883406 00:04:42.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730883406_collect-vmstat.pm.log 00:04:42.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730883406_collect-cpu-load.pm.log 00:04:42.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730883406_collect-cpu-temp.pm.log 00:04:42.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730883406_collect-bmc-pm.bmc.pm.log 00:04:43.562 09:56:47 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:04:43.562 09:56:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:43.562 09:56:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:43.562 09:56:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.562 09:56:47 -- spdk/autobuild.sh@16 -- $ date -u 00:04:43.562 Wed Nov 6 08:56:47 AM UTC 2024 00:04:43.562 09:56:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:43.562 v25.01-pre-170-gd1c46ed8e 00:04:43.562 09:56:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:43.562 09:56:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:43.562 09:56:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:43.562 09:56:47 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:43.562 09:56:47 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:43.562 09:56:47 -- common/autotest_common.sh@10 -- $ set +x 00:04:43.823 ************************************ 00:04:43.823 START TEST ubsan 00:04:43.823 ************************************ 00:04:43.823 09:56:47 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:04:43.823 using ubsan 00:04:43.823 00:04:43.823 real 0m0.001s 00:04:43.823 user 0m0.001s 00:04:43.823 sys 0m0.000s 00:04:43.823 09:56:47 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:43.823 09:56:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:43.823 ************************************ 00:04:43.823 END TEST ubsan 00:04:43.823 ************************************ 00:04:43.824 09:56:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:43.824 09:56:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:43.824 09:56:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:43.824 09:56:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:43.824 09:56:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:43.824 09:56:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:43.824 09:56:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:43.824 09:56:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:43.824 
09:56:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:43.824 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:43.824 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:44.396 Using 'verbs' RDMA provider 00:05:00.252 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:12.487 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:12.487 Creating mk/config.mk...done. 00:05:12.487 Creating mk/cc.flags.mk...done. 00:05:12.487 Type 'make' to build. 00:05:12.487 09:57:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:05:12.487 09:57:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:12.487 09:57:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:12.487 09:57:15 -- common/autotest_common.sh@10 -- $ set +x 00:05:12.487 ************************************ 00:05:12.487 START TEST make 00:05:12.487 ************************************ 00:05:12.487 09:57:15 make -- common/autotest_common.sh@1127 -- $ make -j144 00:05:12.487 make[1]: Nothing to be done for 'all'. 00:05:13.873 The Meson build system 00:05:13.873 Version: 1.5.0 00:05:13.873 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:13.873 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:13.873 Build type: native build 00:05:13.873 Project name: libvfio-user 00:05:13.873 Project version: 0.0.1 00:05:13.873 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:13.873 C linker for the host machine: cc ld.bfd 2.40-14 00:05:13.873 Host machine cpu family: x86_64 00:05:13.873 Host machine cpu: x86_64 00:05:13.873 Run-time dependency threads found: YES 00:05:13.873 Library dl found: YES 00:05:13.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:13.873 Run-time dependency json-c found: YES 0.17 00:05:13.873 Run-time dependency cmocka found: YES 1.1.7 00:05:13.873 Program pytest-3 found: NO 00:05:13.873 Program flake8 found: NO 00:05:13.873 Program misspell-fixer found: NO 00:05:13.873 Program restructuredtext-lint found: NO 00:05:13.873 Program valgrind found: YES (/usr/bin/valgrind) 00:05:13.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:13.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:13.873 Compiler for C supports arguments -Wwrite-strings: YES 00:05:13.873 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:13.873 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:13.873 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:13.873 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:13.873 Build targets in project: 8 00:05:13.873 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:13.873 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:13.873 00:05:13.873 libvfio-user 0.0.1 00:05:13.873 00:05:13.873 User defined options 00:05:13.873 buildtype : debug 00:05:13.873 default_library: shared 00:05:13.873 libdir : /usr/local/lib 00:05:13.873 00:05:13.873 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:14.133 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:14.133 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:14.133 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:14.133 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:14.133 [4/37] Compiling C object samples/null.p/null.c.o 00:05:14.133 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:14.133 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:14.133 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:14.133 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:14.133 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:14.133 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:14.133 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:14.133 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:14.133 [13/37] Compiling C object samples/server.p/server.c.o 00:05:14.133 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:14.133 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:14.133 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:14.133 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:14.133 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:14.133 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:14.133 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:14.133 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:14.133 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:14.133 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:14.133 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:14.393 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:14.393 [26/37] Compiling C object samples/client.p/client.c.o 00:05:14.393 [27/37] Linking target samples/client 00:05:14.393 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:14.393 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:14.393 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:05:14.393 [31/37] Linking target test/unit_tests 00:05:14.393 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:14.655 [33/37] Linking target samples/null 00:05:14.655 [34/37] Linking target samples/lspci 00:05:14.655 [35/37] Linking target samples/server 00:05:14.655 [36/37] Linking target samples/gpio-pci-idio-16 00:05:14.655 [37/37] Linking target samples/shadow_ioeventfd_server 00:05:14.655 INFO: autodetecting backend as ninja 00:05:14.655 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:05:14.655 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:14.916 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:14.916 ninja: no work to do. 00:05:21.517 The Meson build system 00:05:21.517 Version: 1.5.0 00:05:21.517 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:21.517 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:21.517 Build type: native build 00:05:21.517 Program cat found: YES (/usr/bin/cat) 00:05:21.517 Project name: DPDK 00:05:21.517 Project version: 24.03.0 00:05:21.517 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:21.517 C linker for the host machine: cc ld.bfd 2.40-14 00:05:21.517 Host machine cpu family: x86_64 00:05:21.517 Host machine cpu: x86_64 00:05:21.517 Message: ## Building in Developer Mode ## 00:05:21.517 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:21.517 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:21.517 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:21.517 Program python3 found: YES (/usr/bin/python3) 00:05:21.517 Program cat found: YES (/usr/bin/cat) 00:05:21.517 Compiler for C supports arguments -march=native: YES 00:05:21.517 Checking for size of "void *" : 8 00:05:21.517 Checking for size of "void *" : 8 (cached) 00:05:21.517 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:21.517 Library m found: YES 00:05:21.517 Library numa found: YES 00:05:21.517 Has header "numaif.h" : YES 00:05:21.517 Library fdt found: NO 00:05:21.517 Library execinfo found: NO 00:05:21.517 Has header "execinfo.h" : YES 00:05:21.517 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:21.517 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:21.517 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:21.517 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:21.517 Run-time dependency openssl found: YES 3.1.1 00:05:21.517 Run-time dependency libpcap found: YES 1.10.4 00:05:21.517 Has header "pcap.h" with dependency libpcap: YES 00:05:21.517 Compiler for C supports arguments -Wcast-qual: YES 00:05:21.517 Compiler for C supports arguments -Wdeprecated: YES 00:05:21.517 Compiler for C supports arguments -Wformat: YES 00:05:21.517 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:21.517 Compiler for C supports arguments -Wformat-security: NO 00:05:21.517 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:21.517 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:21.517 Compiler for C supports arguments -Wnested-externs: YES 00:05:21.517 Compiler for C supports arguments -Wold-style-definition: YES 00:05:21.517 Compiler for C supports arguments -Wpointer-arith: YES 00:05:21.517 Compiler for C supports arguments -Wsign-compare: YES 00:05:21.517 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:21.517 Compiler for C supports arguments -Wundef: YES 00:05:21.517 Compiler for C supports arguments -Wwrite-strings: YES 00:05:21.517 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:21.517 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:05:21.517 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:21.518 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:21.518 Program objdump found: YES (/usr/bin/objdump) 00:05:21.518 Compiler for C supports arguments -mavx512f: YES 00:05:21.518 Checking if "AVX512 checking" compiles: YES 00:05:21.518 Fetching value of define "__SSE4_2__" : 1 00:05:21.518 Fetching value of define "__AES__" : 1 00:05:21.518 Fetching value of define "__AVX__" : 1 00:05:21.518 Fetching value of define "__AVX2__" : 1 00:05:21.518 Fetching value of define "__AVX512BW__" : 1 00:05:21.518 Fetching value of define "__AVX512CD__" : 1 00:05:21.518 Fetching value of define "__AVX512DQ__" : 1 00:05:21.518 Fetching value of define "__AVX512F__" : 1 00:05:21.518 Fetching value of define "__AVX512VL__" : 1 00:05:21.518 Fetching value of define "__PCLMUL__" : 1 00:05:21.518 Fetching value of define "__RDRND__" : 1 00:05:21.518 Fetching value of define "__RDSEED__" : 1 00:05:21.518 Fetching value of define "__VPCLMULQDQ__" : 1 00:05:21.518 Fetching value of define "__znver1__" : (undefined) 00:05:21.518 Fetching value of define "__znver2__" : (undefined) 00:05:21.518 Fetching value of define "__znver3__" : (undefined) 00:05:21.518 Fetching value of define "__znver4__" : (undefined) 00:05:21.518 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:21.518 Message: lib/log: Defining dependency "log" 00:05:21.518 Message: lib/kvargs: Defining dependency "kvargs" 00:05:21.518 Message: lib/telemetry: Defining dependency "telemetry" 00:05:21.518 Checking for function "getentropy" : NO 00:05:21.518 Message: lib/eal: Defining dependency "eal" 00:05:21.518 Message: lib/ring: Defining dependency "ring" 00:05:21.518 Message: lib/rcu: Defining dependency "rcu" 00:05:21.518 Message: lib/mempool: Defining dependency "mempool" 00:05:21.518 Message: lib/mbuf: Defining dependency "mbuf" 00:05:21.518 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:21.518 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:21.518 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:21.518 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:21.518 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:21.518 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:05:21.518 Compiler for C supports arguments -mpclmul: YES 00:05:21.518 Compiler for C supports arguments -maes: YES 00:05:21.518 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:21.518 Compiler for C supports arguments -mavx512bw: YES 00:05:21.518 Compiler for C supports arguments -mavx512dq: YES 00:05:21.518 Compiler for C supports arguments -mavx512vl: YES 00:05:21.518 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:21.518 Compiler for C supports arguments -mavx2: YES 00:05:21.518 Compiler for C supports arguments -mavx: YES 00:05:21.518 Message: lib/net: Defining dependency "net" 00:05:21.518 Message: lib/meter: Defining dependency "meter" 00:05:21.518 Message: lib/ethdev: Defining dependency "ethdev" 00:05:21.518 Message: lib/pci: Defining dependency "pci" 00:05:21.518 Message: lib/cmdline: Defining dependency "cmdline" 00:05:21.518 Message: lib/hash: Defining dependency "hash" 00:05:21.518 Message: lib/timer: Defining dependency "timer" 00:05:21.518 Message: lib/compressdev: Defining dependency "compressdev" 00:05:21.518 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:21.518 Message: lib/dmadev: Defining dependency "dmadev" 
00:05:21.518 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:21.518 Message: lib/power: Defining dependency "power" 00:05:21.518 Message: lib/reorder: Defining dependency "reorder" 00:05:21.518 Message: lib/security: Defining dependency "security" 00:05:21.518 Has header "linux/userfaultfd.h" : YES 00:05:21.518 Has header "linux/vduse.h" : YES 00:05:21.518 Message: lib/vhost: Defining dependency "vhost" 00:05:21.518 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:21.518 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:21.518 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:21.518 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:21.518 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:21.518 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:21.518 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:21.518 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:21.518 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:21.518 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:21.518 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:21.518 Configuring doxy-api-html.conf using configuration 00:05:21.518 Configuring doxy-api-man.conf using configuration 00:05:21.518 Program mandb found: YES (/usr/bin/mandb) 00:05:21.518 Program sphinx-build found: NO 00:05:21.518 Configuring rte_build_config.h using configuration 00:05:21.518 Message: 00:05:21.518 ================= 00:05:21.518 Applications Enabled 00:05:21.518 ================= 00:05:21.518 00:05:21.518 apps: 00:05:21.518 00:05:21.518 00:05:21.518 Message: 00:05:21.518 ================= 00:05:21.518 Libraries Enabled 00:05:21.518 ================= 00:05:21.518 00:05:21.518 libs: 00:05:21.518 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:21.518 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:21.518 cryptodev, dmadev, power, reorder, security, vhost, 00:05:21.518 00:05:21.518 Message: 00:05:21.518 =============== 00:05:21.518 Drivers Enabled 00:05:21.518 =============== 00:05:21.518 00:05:21.518 common: 00:05:21.518 00:05:21.518 bus: 00:05:21.518 pci, vdev, 00:05:21.518 mempool: 00:05:21.518 ring, 00:05:21.518 dma: 00:05:21.518 00:05:21.518 net: 00:05:21.518 00:05:21.518 crypto: 00:05:21.518 00:05:21.518 compress: 00:05:21.518 00:05:21.518 vdpa: 00:05:21.518 00:05:21.518 00:05:21.518 Message: 00:05:21.518 ================= 00:05:21.518 Content Skipped 00:05:21.518 ================= 00:05:21.518 00:05:21.518 apps: 00:05:21.518 dumpcap: explicitly disabled via build config 00:05:21.518 graph: explicitly disabled via build config 00:05:21.518 pdump: explicitly disabled via build config 00:05:21.518 proc-info: explicitly disabled via build config 00:05:21.518 test-acl: explicitly disabled via build config 00:05:21.518 test-bbdev: explicitly disabled via build config 00:05:21.518 test-cmdline: explicitly disabled via build config 00:05:21.518 test-compress-perf: explicitly disabled via build config 00:05:21.518 test-crypto-perf: explicitly disabled via build config 00:05:21.518 test-dma-perf: explicitly disabled via build config 00:05:21.518 test-eventdev: explicitly disabled via build config 00:05:21.518 test-fib: explicitly disabled via build config 00:05:21.518 test-flow-perf: explicitly disabled via build config 00:05:21.518 test-gpudev: explicitly disabled 
via build config 00:05:21.518 test-mldev: explicitly disabled via build config 00:05:21.518 test-pipeline: explicitly disabled via build config 00:05:21.518 test-pmd: explicitly disabled via build config 00:05:21.518 test-regex: explicitly disabled via build config 00:05:21.518 test-sad: explicitly disabled via build config 00:05:21.518 test-security-perf: explicitly disabled via build config 00:05:21.518 00:05:21.518 libs: 00:05:21.518 argparse: explicitly disabled via build config 00:05:21.518 metrics: explicitly disabled via build config 00:05:21.518 acl: explicitly disabled via build config 00:05:21.518 bbdev: explicitly disabled via build config 00:05:21.518 bitratestats: explicitly disabled via build config 00:05:21.518 bpf: explicitly disabled via build config 00:05:21.518 cfgfile: explicitly disabled via build config 00:05:21.518 distributor: explicitly disabled via build config 00:05:21.518 efd: explicitly disabled via build config 00:05:21.518 eventdev: explicitly disabled via build config 00:05:21.518 dispatcher: explicitly disabled via build config 00:05:21.518 gpudev: explicitly disabled via build config 00:05:21.518 gro: explicitly disabled via build config 00:05:21.518 gso: explicitly disabled via build config 00:05:21.518 ip_frag: explicitly disabled via build config 00:05:21.518 jobstats: explicitly disabled via build config 00:05:21.518 latencystats: explicitly disabled via build config 00:05:21.518 lpm: explicitly disabled via build config 00:05:21.518 member: explicitly disabled via build config 00:05:21.518 pcapng: explicitly disabled via build config 00:05:21.518 rawdev: explicitly disabled via build config 00:05:21.518 regexdev: explicitly disabled via build config 00:05:21.518 mldev: explicitly disabled via build config 00:05:21.518 rib: explicitly disabled via build config 00:05:21.518 sched: explicitly disabled via build config 00:05:21.518 stack: explicitly disabled via build config 00:05:21.518 ipsec: explicitly disabled via build config 00:05:21.518 pdcp: explicitly disabled via build config 00:05:21.518 fib: explicitly disabled via build config 00:05:21.518 port: explicitly disabled via build config 00:05:21.518 pdump: explicitly disabled via build config 00:05:21.518 table: explicitly disabled via build config 00:05:21.519 pipeline: explicitly disabled via build config 00:05:21.519 graph: explicitly disabled via build config 00:05:21.519 node: explicitly disabled via build config 00:05:21.519 00:05:21.519 drivers: 00:05:21.519 common/cpt: not in enabled drivers build config 00:05:21.519 common/dpaax: not in enabled drivers build config 00:05:21.519 common/iavf: not in enabled drivers build config 00:05:21.519 common/idpf: not in enabled drivers build config 00:05:21.519 common/ionic: not in enabled drivers build config 00:05:21.519 common/mvep: not in enabled drivers build config 00:05:21.519 common/octeontx: not in enabled drivers build config 00:05:21.519 bus/auxiliary: not in enabled drivers build config 00:05:21.519 bus/cdx: not in enabled drivers build config 00:05:21.519 bus/dpaa: not in enabled drivers build config 00:05:21.519 bus/fslmc: not in enabled drivers build config 00:05:21.519 bus/ifpga: not in enabled drivers build config 00:05:21.519 bus/platform: not in enabled drivers build config 00:05:21.519 bus/uacce: not in enabled drivers build config 00:05:21.519 bus/vmbus: not in enabled drivers build config 00:05:21.519 common/cnxk: not in enabled drivers build config 00:05:21.519 common/mlx5: not in enabled drivers build config 00:05:21.519 
common/nfp: not in enabled drivers build config 00:05:21.519 common/nitrox: not in enabled drivers build config 00:05:21.519 common/qat: not in enabled drivers build config 00:05:21.519 common/sfc_efx: not in enabled drivers build config 00:05:21.519 mempool/bucket: not in enabled drivers build config 00:05:21.519 mempool/cnxk: not in enabled drivers build config 00:05:21.519 mempool/dpaa: not in enabled drivers build config 00:05:21.519 mempool/dpaa2: not in enabled drivers build config 00:05:21.519 mempool/octeontx: not in enabled drivers build config 00:05:21.519 mempool/stack: not in enabled drivers build config 00:05:21.519 dma/cnxk: not in enabled drivers build config 00:05:21.519 dma/dpaa: not in enabled drivers build config 00:05:21.519 dma/dpaa2: not in enabled drivers build config 00:05:21.519 dma/hisilicon: not in enabled drivers build config 00:05:21.519 dma/idxd: not in enabled drivers build config 00:05:21.519 dma/ioat: not in enabled drivers build config 00:05:21.519 dma/skeleton: not in enabled drivers build config 00:05:21.519 net/af_packet: not in enabled drivers build config 00:05:21.519 net/af_xdp: not in enabled drivers build config 00:05:21.519 net/ark: not in enabled drivers build config 00:05:21.519 net/atlantic: not in enabled drivers build config 00:05:21.519 net/avp: not in enabled drivers build config 00:05:21.519 net/axgbe: not in enabled drivers build config 00:05:21.519 net/bnx2x: not in enabled drivers build config 00:05:21.519 net/bnxt: not in enabled drivers build config 00:05:21.519 net/bonding: not in enabled drivers build config 00:05:21.519 net/cnxk: not in enabled drivers build config 00:05:21.519 net/cpfl: not in enabled drivers build config 00:05:21.519 net/cxgbe: not in enabled drivers build config 00:05:21.519 net/dpaa: not in enabled drivers build config 00:05:21.519 net/dpaa2: not in enabled drivers build config 00:05:21.519 net/e1000: not in enabled drivers build config 00:05:21.519 net/ena: not in enabled drivers build config 00:05:21.519 net/enetc: not in enabled drivers build config 00:05:21.519 net/enetfec: not in enabled drivers build config 00:05:21.519 net/enic: not in enabled drivers build config 00:05:21.519 net/failsafe: not in enabled drivers build config 00:05:21.519 net/fm10k: not in enabled drivers build config 00:05:21.519 net/gve: not in enabled drivers build config 00:05:21.519 net/hinic: not in enabled drivers build config 00:05:21.519 net/hns3: not in enabled drivers build config 00:05:21.519 net/i40e: not in enabled drivers build config 00:05:21.519 net/iavf: not in enabled drivers build config 00:05:21.519 net/ice: not in enabled drivers build config 00:05:21.519 net/idpf: not in enabled drivers build config 00:05:21.519 net/igc: not in enabled drivers build config 00:05:21.519 net/ionic: not in enabled drivers build config 00:05:21.519 net/ipn3ke: not in enabled drivers build config 00:05:21.519 net/ixgbe: not in enabled drivers build config 00:05:21.519 net/mana: not in enabled drivers build config 00:05:21.519 net/memif: not in enabled drivers build config 00:05:21.519 net/mlx4: not in enabled drivers build config 00:05:21.519 net/mlx5: not in enabled drivers build config 00:05:21.519 net/mvneta: not in enabled drivers build config 00:05:21.519 net/mvpp2: not in enabled drivers build config 00:05:21.519 net/netvsc: not in enabled drivers build config 00:05:21.519 net/nfb: not in enabled drivers build config 00:05:21.519 net/nfp: not in enabled drivers build config 00:05:21.519 net/ngbe: not in enabled drivers build 
config 00:05:21.519 net/null: not in enabled drivers build config 00:05:21.519 net/octeontx: not in enabled drivers build config 00:05:21.519 net/octeon_ep: not in enabled drivers build config 00:05:21.519 net/pcap: not in enabled drivers build config 00:05:21.519 net/pfe: not in enabled drivers build config 00:05:21.519 net/qede: not in enabled drivers build config 00:05:21.519 net/ring: not in enabled drivers build config 00:05:21.519 net/sfc: not in enabled drivers build config 00:05:21.519 net/softnic: not in enabled drivers build config 00:05:21.519 net/tap: not in enabled drivers build config 00:05:21.519 net/thunderx: not in enabled drivers build config 00:05:21.519 net/txgbe: not in enabled drivers build config 00:05:21.519 net/vdev_netvsc: not in enabled drivers build config 00:05:21.519 net/vhost: not in enabled drivers build config 00:05:21.519 net/virtio: not in enabled drivers build config 00:05:21.519 net/vmxnet3: not in enabled drivers build config 00:05:21.519 raw/*: missing internal dependency, "rawdev" 00:05:21.519 crypto/armv8: not in enabled drivers build config 00:05:21.519 crypto/bcmfs: not in enabled drivers build config 00:05:21.519 crypto/caam_jr: not in enabled drivers build config 00:05:21.519 crypto/ccp: not in enabled drivers build config 00:05:21.519 crypto/cnxk: not in enabled drivers build config 00:05:21.519 crypto/dpaa_sec: not in enabled drivers build config 00:05:21.519 crypto/dpaa2_sec: not in enabled drivers build config 00:05:21.519 crypto/ipsec_mb: not in enabled drivers build config 00:05:21.519 crypto/mlx5: not in enabled drivers build config 00:05:21.519 crypto/mvsam: not in enabled drivers build config 00:05:21.519 crypto/nitrox: not in enabled drivers build config 00:05:21.519 crypto/null: not in enabled drivers build config 00:05:21.519 crypto/octeontx: not in enabled drivers build config 00:05:21.519 crypto/openssl: not in enabled drivers build config 00:05:21.519 crypto/scheduler: not in enabled drivers build config 00:05:21.519 crypto/uadk: not in enabled drivers build config 00:05:21.519 crypto/virtio: not in enabled drivers build config 00:05:21.519 compress/isal: not in enabled drivers build config 00:05:21.519 compress/mlx5: not in enabled drivers build config 00:05:21.519 compress/nitrox: not in enabled drivers build config 00:05:21.519 compress/octeontx: not in enabled drivers build config 00:05:21.519 compress/zlib: not in enabled drivers build config 00:05:21.519 regex/*: missing internal dependency, "regexdev" 00:05:21.519 ml/*: missing internal dependency, "mldev" 00:05:21.519 vdpa/ifc: not in enabled drivers build config 00:05:21.519 vdpa/mlx5: not in enabled drivers build config 00:05:21.519 vdpa/nfp: not in enabled drivers build config 00:05:21.519 vdpa/sfc: not in enabled drivers build config 00:05:21.519 event/*: missing internal dependency, "eventdev" 00:05:21.519 baseband/*: missing internal dependency, "bbdev" 00:05:21.519 gpu/*: missing internal dependency, "gpudev" 00:05:21.519 00:05:21.519 00:05:21.519 Build targets in project: 84 00:05:21.519 00:05:21.519 DPDK 24.03.0 00:05:21.519 00:05:21.519 User defined options 00:05:21.519 buildtype : debug 00:05:21.519 default_library : shared 00:05:21.519 libdir : lib 00:05:21.519 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:21.519 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:21.519 c_link_args : 00:05:21.519 cpu_instruction_set: native 00:05:21.519 disable_apps : 
pdump,dumpcap,test-cmdline,test-pmd,test-crypto-perf,test-gpudev,proc-info,graph,test-flow-perf,test-compress-perf,test-fib,test-regex,test-eventdev,test-security-perf,test,test-dma-perf,test-acl,test-pipeline,test-bbdev,test-sad,test-mldev 00:05:21.519 disable_libs : pdump,gpudev,rawdev,pcapng,node,metrics,bitratestats,member,pdcp,eventdev,lpm,table,distributor,regexdev,bpf,acl,stack,ipsec,graph,pipeline,gso,latencystats,jobstats,port,cfgfile,dispatcher,sched,bbdev,gro,rib,argparse,fib,efd,mldev,ip_frag 00:05:21.519 enable_docs : false 00:05:21.519 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:21.519 enable_kmods : false 00:05:21.519 max_lcores : 128 00:05:21.519 tests : false 00:05:21.519 00:05:21.519 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:21.519 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:21.519 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:21.519 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:21.519 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:21.519 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:21.519 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:21.519 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:21.519 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:21.519 [8/267] Linking static target lib/librte_kvargs.a 00:05:21.519 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:21.781 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:21.781 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:21.781 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:21.781 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:21.781 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:21.781 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:21.781 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:21.781 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:21.781 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:21.781 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:21.781 [20/267] Linking static target lib/librte_log.a 00:05:21.781 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:21.781 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:21.781 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:21.781 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:21.781 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:21.781 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:21.781 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:21.781 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:21.781 [29/267] Linking static target lib/librte_pci.a 00:05:21.781 [30/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:21.781 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:21.781 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:21.781 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:21.781 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:22.041 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:22.041 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:22.041 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:22.041 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:22.041 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:22.041 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.041 [41/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:22.041 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:22.041 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.041 [44/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:22.041 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:22.041 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:22.041 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:22.041 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:22.041 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:22.041 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:22.041 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:22.041 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:22.041 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:22.041 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:22.041 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:22.041 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:22.304 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:22.304 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:22.304 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:22.304 [60/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:22.304 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:22.304 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:22.304 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:22.304 [64/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:22.304 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:22.304 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:22.304 [67/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:22.304 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:22.304 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:22.304 [70/267] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:22.304 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:22.304 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:22.304 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:22.304 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:22.304 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:22.304 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:22.304 [77/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:22.304 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:22.304 [79/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:22.304 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:22.304 [81/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:22.304 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:22.304 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:22.304 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:22.304 [85/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:22.304 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:22.304 [87/267] Linking static target lib/librte_ring.a 00:05:22.304 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:22.304 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:22.304 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:22.304 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:22.304 [92/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:22.304 [93/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:22.304 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:22.304 [95/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:22.304 [96/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:22.304 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:22.304 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:22.304 [99/267] Linking static target lib/librte_telemetry.a 00:05:22.304 [100/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:22.304 [101/267] Linking static target lib/librte_dmadev.a 00:05:22.304 [102/267] Linking static target lib/librte_timer.a 00:05:22.304 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:22.304 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:22.304 [105/267] Linking static target lib/librte_meter.a 00:05:22.304 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:22.304 [107/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:22.304 [108/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:22.304 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:22.304 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:22.304 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:22.304 
[112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:22.304 [113/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:22.304 [114/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:22.304 [115/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:22.304 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:22.304 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:22.304 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:22.304 [119/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:22.304 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:22.304 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:22.304 [122/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:22.304 [123/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:22.304 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:22.305 [125/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:22.305 [126/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:22.305 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:22.305 [128/267] Linking static target lib/librte_cmdline.a 00:05:22.305 [129/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:22.305 [130/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:22.305 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:22.305 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:22.305 [133/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:22.305 [134/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:22.305 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:22.305 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:22.305 [137/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:22.305 [138/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:22.305 [139/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:22.305 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:22.305 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:22.305 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:22.305 [143/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:22.305 [144/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:22.305 [145/267] Linking static target lib/librte_net.a 00:05:22.305 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:22.305 [147/267] Linking static target lib/librte_reorder.a 00:05:22.305 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:22.305 [149/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:22.305 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:22.305 [151/267] Linking static target lib/librte_rcu.a 00:05:22.305 [152/267] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:22.305 [153/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.305 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:22.305 [155/267] Linking static target lib/librte_compressdev.a 00:05:22.305 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:22.305 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:22.305 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:22.305 [159/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:22.305 [160/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:22.305 [161/267] Linking target lib/librte_log.so.24.1 00:05:22.305 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:22.305 [163/267] Linking static target lib/librte_eal.a 00:05:22.305 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:22.305 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:22.305 [166/267] Linking static target lib/librte_mempool.a 00:05:22.305 [167/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:22.305 [168/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:22.305 [169/267] Linking static target lib/librte_power.a 00:05:22.565 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:22.565 [171/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:22.565 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:22.565 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:22.565 [174/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:22.565 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:22.565 [176/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:22.565 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:22.565 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:22.565 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.565 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.565 [181/267] Linking static target drivers/librte_bus_vdev.a 00:05:22.565 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:22.565 [183/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:22.565 [184/267] Linking static target lib/librte_security.a 00:05:22.565 [185/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.565 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:22.565 [187/267] Linking static target lib/librte_mbuf.a 00:05:22.565 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.565 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:22.565 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:22.565 [191/267] Linking static target lib/librte_hash.a 00:05:22.565 [192/267] Linking target lib/librte_kvargs.so.24.1 00:05:22.565 [193/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:22.565 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:22.565 [195/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:22.565 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:22.565 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.565 [198/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.565 [199/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [200/267] Linking static target drivers/librte_mempool_ring.a 00:05:22.826 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.826 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.826 [203/267] Linking static target drivers/librte_bus_pci.a 00:05:22.826 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:22.826 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:22.826 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [208/267] Linking static target lib/librte_cryptodev.a 00:05:22.826 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:22.826 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.826 [213/267] Linking target lib/librte_telemetry.so.24.1 00:05:22.826 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.087 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:23.087 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.087 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.348 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:23.348 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:23.348 [220/267] Linking static target lib/librte_ethdev.a 00:05:23.348 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.348 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.348 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.610 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.610 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.610 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.182 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:24.182 [228/267] Linking static target lib/librte_vhost.a 00:05:25.127 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.241 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.840 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.411 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.411 [233/267] Linking target lib/librte_eal.so.24.1 00:05:33.672 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:33.672 [235/267] Linking target lib/librte_timer.so.24.1 00:05:33.672 [236/267] Linking target lib/librte_meter.so.24.1 00:05:33.672 [237/267] Linking target lib/librte_ring.so.24.1 00:05:33.672 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:33.672 [239/267] Linking target lib/librte_pci.so.24.1 00:05:33.672 [240/267] Linking target lib/librte_dmadev.so.24.1 00:05:33.932 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:33.932 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:33.932 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:33.932 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:33.932 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:33.932 [246/267] Linking target lib/librte_rcu.so.24.1 00:05:33.932 [247/267] Linking target lib/librte_mempool.so.24.1 00:05:33.932 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:05:33.932 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:33.932 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:33.932 [251/267] Linking target lib/librte_mbuf.so.24.1 00:05:33.932 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:34.193 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:34.193 [254/267] Linking target lib/librte_compressdev.so.24.1 00:05:34.193 [255/267] Linking target lib/librte_reorder.so.24.1 00:05:34.193 [256/267] Linking target lib/librte_net.so.24.1 00:05:34.193 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:34.453 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:34.453 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:34.453 [260/267] Linking target lib/librte_cmdline.so.24.1 00:05:34.453 [261/267] Linking target lib/librte_hash.so.24.1 00:05:34.453 [262/267] Linking target lib/librte_security.so.24.1 00:05:34.453 [263/267] Linking target lib/librte_ethdev.so.24.1 00:05:34.453 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:34.453 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:34.714 [266/267] Linking target lib/librte_vhost.so.24.1 00:05:34.714 [267/267] Linking target lib/librte_power.so.24.1 00:05:34.714 INFO: autodetecting backend as ninja 00:05:34.714 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:05:38.922 CC lib/ut/ut.o 00:05:38.922 CC lib/ut_mock/mock.o 00:05:38.922 CC lib/log/log.o 00:05:38.922 CC lib/log/log_deprecated.o 00:05:38.922 CC lib/log/log_flags.o 00:05:38.922 LIB 
libspdk_ut.a 00:05:38.922 SO libspdk_ut.so.2.0 00:05:38.922 LIB libspdk_ut_mock.a 00:05:38.922 LIB libspdk_log.a 00:05:38.922 SO libspdk_ut_mock.so.6.0 00:05:38.922 SO libspdk_log.so.7.1 00:05:38.922 SYMLINK libspdk_ut.so 00:05:38.922 SYMLINK libspdk_ut_mock.so 00:05:38.922 SYMLINK libspdk_log.so 00:05:39.495 CXX lib/trace_parser/trace.o 00:05:39.495 CC lib/dma/dma.o 00:05:39.495 CC lib/ioat/ioat.o 00:05:39.495 CC lib/util/base64.o 00:05:39.495 CC lib/util/bit_array.o 00:05:39.495 CC lib/util/cpuset.o 00:05:39.495 CC lib/util/crc16.o 00:05:39.495 CC lib/util/crc32.o 00:05:39.495 CC lib/util/crc32c.o 00:05:39.495 CC lib/util/dif.o 00:05:39.495 CC lib/util/crc32_ieee.o 00:05:39.495 CC lib/util/crc64.o 00:05:39.495 CC lib/util/fd.o 00:05:39.495 CC lib/util/fd_group.o 00:05:39.495 CC lib/util/file.o 00:05:39.495 CC lib/util/hexlify.o 00:05:39.495 CC lib/util/iov.o 00:05:39.495 CC lib/util/math.o 00:05:39.495 CC lib/util/net.o 00:05:39.495 CC lib/util/pipe.o 00:05:39.495 CC lib/util/strerror_tls.o 00:05:39.495 CC lib/util/string.o 00:05:39.495 CC lib/util/uuid.o 00:05:39.495 CC lib/util/xor.o 00:05:39.495 CC lib/util/zipf.o 00:05:39.495 CC lib/util/md5.o 00:05:39.495 CC lib/vfio_user/host/vfio_user.o 00:05:39.495 CC lib/vfio_user/host/vfio_user_pci.o 00:05:39.755 LIB libspdk_dma.a 00:05:39.755 SO libspdk_dma.so.5.0 00:05:39.755 LIB libspdk_ioat.a 00:05:39.755 SO libspdk_ioat.so.7.0 00:05:39.755 SYMLINK libspdk_dma.so 00:05:39.755 SYMLINK libspdk_ioat.so 00:05:39.755 LIB libspdk_vfio_user.a 00:05:39.755 SO libspdk_vfio_user.so.5.0 00:05:40.015 LIB libspdk_util.a 00:05:40.015 SYMLINK libspdk_vfio_user.so 00:05:40.015 SO libspdk_util.so.10.1 00:05:40.015 SYMLINK libspdk_util.so 00:05:40.277 LIB libspdk_trace_parser.a 00:05:40.277 SO libspdk_trace_parser.so.6.0 00:05:40.277 SYMLINK libspdk_trace_parser.so 00:05:40.536 CC lib/idxd/idxd.o 00:05:40.536 CC lib/idxd/idxd_user.o 00:05:40.536 CC lib/idxd/idxd_kernel.o 00:05:40.536 CC lib/rdma_utils/rdma_utils.o 00:05:40.536 CC lib/env_dpdk/env.o 00:05:40.536 CC lib/env_dpdk/memory.o 00:05:40.536 CC lib/env_dpdk/pci.o 00:05:40.536 CC lib/env_dpdk/init.o 00:05:40.536 CC lib/env_dpdk/threads.o 00:05:40.536 CC lib/env_dpdk/pci_ioat.o 00:05:40.536 CC lib/env_dpdk/pci_virtio.o 00:05:40.536 CC lib/env_dpdk/pci_vmd.o 00:05:40.536 CC lib/env_dpdk/pci_idxd.o 00:05:40.536 CC lib/env_dpdk/pci_event.o 00:05:40.536 CC lib/conf/conf.o 00:05:40.536 CC lib/json/json_parse.o 00:05:40.536 CC lib/json/json_util.o 00:05:40.536 CC lib/env_dpdk/sigbus_handler.o 00:05:40.536 CC lib/vmd/vmd.o 00:05:40.536 CC lib/json/json_write.o 00:05:40.536 CC lib/vmd/led.o 00:05:40.536 CC lib/env_dpdk/pci_dpdk.o 00:05:40.536 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:40.536 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:40.797 LIB libspdk_conf.a 00:05:40.797 LIB libspdk_rdma_utils.a 00:05:40.797 LIB libspdk_json.a 00:05:40.797 SO libspdk_conf.so.6.0 00:05:40.797 SO libspdk_rdma_utils.so.1.0 00:05:40.797 SO libspdk_json.so.6.0 00:05:40.797 SYMLINK libspdk_conf.so 00:05:40.797 SYMLINK libspdk_rdma_utils.so 00:05:40.797 SYMLINK libspdk_json.so 00:05:41.059 LIB libspdk_idxd.a 00:05:41.059 SO libspdk_idxd.so.12.1 00:05:41.059 LIB libspdk_vmd.a 00:05:41.059 SYMLINK libspdk_idxd.so 00:05:41.059 SO libspdk_vmd.so.6.0 00:05:41.059 SYMLINK libspdk_vmd.so 00:05:41.321 CC lib/rdma_provider/common.o 00:05:41.321 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:41.321 CC lib/jsonrpc/jsonrpc_server.o 00:05:41.321 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:41.321 CC lib/jsonrpc/jsonrpc_client.o 00:05:41.321 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:05:41.321 LIB libspdk_rdma_provider.a 00:05:41.582 SO libspdk_rdma_provider.so.7.0 00:05:41.582 LIB libspdk_jsonrpc.a 00:05:41.582 SYMLINK libspdk_rdma_provider.so 00:05:41.582 SO libspdk_jsonrpc.so.6.0 00:05:41.582 SYMLINK libspdk_jsonrpc.so 00:05:41.582 LIB libspdk_env_dpdk.a 00:05:41.844 SO libspdk_env_dpdk.so.15.1 00:05:41.844 SYMLINK libspdk_env_dpdk.so 00:05:42.105 CC lib/rpc/rpc.o 00:05:42.105 LIB libspdk_rpc.a 00:05:42.105 SO libspdk_rpc.so.6.0 00:05:42.366 SYMLINK libspdk_rpc.so 00:05:42.627 CC lib/keyring/keyring.o 00:05:42.627 CC lib/keyring/keyring_rpc.o 00:05:42.627 CC lib/trace/trace.o 00:05:42.627 CC lib/notify/notify.o 00:05:42.627 CC lib/trace/trace_flags.o 00:05:42.627 CC lib/notify/notify_rpc.o 00:05:42.627 CC lib/trace/trace_rpc.o 00:05:42.888 LIB libspdk_notify.a 00:05:42.888 LIB libspdk_trace.a 00:05:42.888 SO libspdk_notify.so.6.0 00:05:42.888 LIB libspdk_keyring.a 00:05:42.888 SO libspdk_trace.so.11.0 00:05:42.888 SO libspdk_keyring.so.2.0 00:05:42.888 SYMLINK libspdk_notify.so 00:05:42.888 SYMLINK libspdk_keyring.so 00:05:42.888 SYMLINK libspdk_trace.so 00:05:43.149 CC lib/thread/thread.o 00:05:43.149 CC lib/thread/iobuf.o 00:05:43.149 CC lib/sock/sock.o 00:05:43.149 CC lib/sock/sock_rpc.o 00:05:43.722 LIB libspdk_sock.a 00:05:43.722 SO libspdk_sock.so.10.0 00:05:43.722 SYMLINK libspdk_sock.so 00:05:44.293 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:44.293 CC lib/nvme/nvme_ctrlr.o 00:05:44.293 CC lib/nvme/nvme_fabric.o 00:05:44.293 CC lib/nvme/nvme_ns_cmd.o 00:05:44.293 CC lib/nvme/nvme_ns.o 00:05:44.293 CC lib/nvme/nvme_pcie_common.o 00:05:44.293 CC lib/nvme/nvme_pcie.o 00:05:44.293 CC lib/nvme/nvme_qpair.o 00:05:44.293 CC lib/nvme/nvme.o 00:05:44.293 CC lib/nvme/nvme_quirks.o 00:05:44.293 CC lib/nvme/nvme_transport.o 00:05:44.293 CC lib/nvme/nvme_discovery.o 00:05:44.293 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:44.293 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:44.293 CC lib/nvme/nvme_tcp.o 00:05:44.293 CC lib/nvme/nvme_opal.o 00:05:44.293 CC lib/nvme/nvme_io_msg.o 00:05:44.293 CC lib/nvme/nvme_poll_group.o 00:05:44.293 CC lib/nvme/nvme_zns.o 00:05:44.293 CC lib/nvme/nvme_stubs.o 00:05:44.293 CC lib/nvme/nvme_auth.o 00:05:44.293 CC lib/nvme/nvme_cuse.o 00:05:44.293 CC lib/nvme/nvme_vfio_user.o 00:05:44.293 CC lib/nvme/nvme_rdma.o 00:05:44.555 LIB libspdk_thread.a 00:05:44.555 SO libspdk_thread.so.11.0 00:05:44.555 SYMLINK libspdk_thread.so 00:05:45.127 CC lib/init/subsystem.o 00:05:45.127 CC lib/init/json_config.o 00:05:45.127 CC lib/init/rpc.o 00:05:45.127 CC lib/init/subsystem_rpc.o 00:05:45.127 CC lib/blob/request.o 00:05:45.127 CC lib/blob/blobstore.o 00:05:45.127 CC lib/blob/zeroes.o 00:05:45.127 CC lib/blob/blob_bs_dev.o 00:05:45.127 CC lib/fsdev/fsdev.o 00:05:45.127 CC lib/vfu_tgt/tgt_endpoint.o 00:05:45.127 CC lib/fsdev/fsdev_io.o 00:05:45.127 CC lib/fsdev/fsdev_rpc.o 00:05:45.127 CC lib/vfu_tgt/tgt_rpc.o 00:05:45.127 CC lib/virtio/virtio.o 00:05:45.127 CC lib/virtio/virtio_vhost_user.o 00:05:45.127 CC lib/virtio/virtio_vfio_user.o 00:05:45.127 CC lib/virtio/virtio_pci.o 00:05:45.127 CC lib/accel/accel.o 00:05:45.127 CC lib/accel/accel_rpc.o 00:05:45.127 CC lib/accel/accel_sw.o 00:05:45.127 LIB libspdk_init.a 00:05:45.387 SO libspdk_init.so.6.0 00:05:45.387 LIB libspdk_virtio.a 00:05:45.387 LIB libspdk_vfu_tgt.a 00:05:45.387 SYMLINK libspdk_init.so 00:05:45.387 SO libspdk_vfu_tgt.so.3.0 00:05:45.387 SO libspdk_virtio.so.7.0 00:05:45.387 SYMLINK libspdk_vfu_tgt.so 00:05:45.387 SYMLINK libspdk_virtio.so 00:05:45.648 LIB libspdk_fsdev.a 
00:05:45.648 SO libspdk_fsdev.so.2.0 00:05:45.648 CC lib/event/app.o 00:05:45.648 CC lib/event/reactor.o 00:05:45.648 CC lib/event/log_rpc.o 00:05:45.648 CC lib/event/app_rpc.o 00:05:45.648 CC lib/event/scheduler_static.o 00:05:45.648 SYMLINK libspdk_fsdev.so 00:05:45.909 LIB libspdk_accel.a 00:05:45.909 SO libspdk_accel.so.16.0 00:05:46.170 LIB libspdk_nvme.a 00:05:46.171 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:46.171 SYMLINK libspdk_accel.so 00:05:46.171 LIB libspdk_event.a 00:05:46.171 SO libspdk_event.so.14.0 00:05:46.171 SO libspdk_nvme.so.15.0 00:05:46.171 SYMLINK libspdk_event.so 00:05:46.431 SYMLINK libspdk_nvme.so 00:05:46.431 CC lib/bdev/bdev.o 00:05:46.431 CC lib/bdev/bdev_rpc.o 00:05:46.431 CC lib/bdev/bdev_zone.o 00:05:46.431 CC lib/bdev/part.o 00:05:46.431 CC lib/bdev/scsi_nvme.o 00:05:46.692 LIB libspdk_fuse_dispatcher.a 00:05:46.692 SO libspdk_fuse_dispatcher.so.1.0 00:05:46.692 SYMLINK libspdk_fuse_dispatcher.so 00:05:47.635 LIB libspdk_blob.a 00:05:47.635 SO libspdk_blob.so.11.0 00:05:47.635 SYMLINK libspdk_blob.so 00:05:47.896 LIB libspdk_bdev.a 00:05:47.896 SO libspdk_bdev.so.17.0 00:05:47.896 CC lib/blobfs/blobfs.o 00:05:47.896 CC lib/lvol/lvol.o 00:05:47.896 CC lib/blobfs/tree.o 00:05:48.158 SYMLINK libspdk_bdev.so 00:05:48.419 CC lib/scsi/dev.o 00:05:48.419 CC lib/scsi/lun.o 00:05:48.419 CC lib/scsi/port.o 00:05:48.419 CC lib/scsi/scsi.o 00:05:48.419 CC lib/scsi/scsi_bdev.o 00:05:48.419 CC lib/scsi/scsi_pr.o 00:05:48.419 CC lib/scsi/scsi_rpc.o 00:05:48.419 CC lib/scsi/task.o 00:05:48.419 CC lib/ftl/ftl_core.o 00:05:48.419 CC lib/ftl/ftl_init.o 00:05:48.419 CC lib/ftl/ftl_layout.o 00:05:48.419 CC lib/ftl/ftl_debug.o 00:05:48.419 CC lib/ftl/ftl_io.o 00:05:48.419 CC lib/nbd/nbd.o 00:05:48.419 CC lib/ftl/ftl_sb.o 00:05:48.419 CC lib/ftl/ftl_l2p.o 00:05:48.419 CC lib/ftl/ftl_l2p_flat.o 00:05:48.419 CC lib/nvmf/ctrlr.o 00:05:48.419 CC lib/nbd/nbd_rpc.o 00:05:48.419 CC lib/nvmf/ctrlr_discovery.o 00:05:48.419 CC lib/ftl/ftl_nv_cache.o 00:05:48.419 CC lib/nvmf/ctrlr_bdev.o 00:05:48.419 CC lib/ftl/ftl_band.o 00:05:48.419 CC lib/nvmf/subsystem.o 00:05:48.419 CC lib/ublk/ublk.o 00:05:48.419 CC lib/nvmf/nvmf.o 00:05:48.419 CC lib/ublk/ublk_rpc.o 00:05:48.419 CC lib/ftl/ftl_band_ops.o 00:05:48.419 CC lib/ftl/ftl_writer.o 00:05:48.419 CC lib/nvmf/nvmf_rpc.o 00:05:48.419 CC lib/ftl/ftl_rq.o 00:05:48.419 CC lib/nvmf/transport.o 00:05:48.419 CC lib/ftl/ftl_reloc.o 00:05:48.419 CC lib/nvmf/tcp.o 00:05:48.419 CC lib/ftl/ftl_l2p_cache.o 00:05:48.419 CC lib/nvmf/stubs.o 00:05:48.419 CC lib/nvmf/mdns_server.o 00:05:48.419 CC lib/ftl/ftl_p2l.o 00:05:48.419 CC lib/nvmf/vfio_user.o 00:05:48.419 CC lib/nvmf/auth.o 00:05:48.419 CC lib/ftl/ftl_p2l_log.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt.o 00:05:48.419 CC lib/nvmf/rdma.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:48.419 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:48.419 CC lib/ftl/utils/ftl_conf.o 00:05:48.419 CC lib/ftl/utils/ftl_md.o 00:05:48.419 CC lib/ftl/utils/ftl_mempool.o 00:05:48.419 CC lib/ftl/utils/ftl_bitmap.o 00:05:48.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 
00:05:48.419 CC lib/ftl/utils/ftl_property.o 00:05:48.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:48.419 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:48.419 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:48.419 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:48.419 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:48.419 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:48.419 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:48.419 CC lib/ftl/base/ftl_base_dev.o 00:05:48.419 CC lib/ftl/ftl_trace.o 00:05:48.419 CC lib/ftl/base/ftl_base_bdev.o 00:05:48.678 LIB libspdk_blobfs.a 00:05:48.678 SO libspdk_blobfs.so.10.0 00:05:48.938 SYMLINK libspdk_blobfs.so 00:05:48.938 LIB libspdk_lvol.a 00:05:48.938 SO libspdk_lvol.so.10.0 00:05:48.938 SYMLINK libspdk_lvol.so 00:05:48.938 LIB libspdk_nbd.a 00:05:48.938 SO libspdk_nbd.so.7.0 00:05:48.938 LIB libspdk_scsi.a 00:05:48.938 SO libspdk_scsi.so.9.0 00:05:48.938 SYMLINK libspdk_nbd.so 00:05:49.199 SYMLINK libspdk_scsi.so 00:05:49.199 LIB libspdk_ublk.a 00:05:49.199 SO libspdk_ublk.so.3.0 00:05:49.199 SYMLINK libspdk_ublk.so 00:05:49.459 CC lib/vhost/vhost.o 00:05:49.459 CC lib/vhost/vhost_scsi.o 00:05:49.459 CC lib/vhost/vhost_rpc.o 00:05:49.459 CC lib/vhost/vhost_blk.o 00:05:49.459 CC lib/vhost/rte_vhost_user.o 00:05:49.459 CC lib/iscsi/conn.o 00:05:49.459 CC lib/iscsi/init_grp.o 00:05:49.459 CC lib/iscsi/iscsi.o 00:05:49.459 CC lib/iscsi/param.o 00:05:49.459 CC lib/iscsi/portal_grp.o 00:05:49.460 CC lib/iscsi/tgt_node.o 00:05:49.460 CC lib/iscsi/iscsi_subsystem.o 00:05:49.460 CC lib/iscsi/iscsi_rpc.o 00:05:49.460 CC lib/iscsi/task.o 00:05:49.460 LIB libspdk_ftl.a 00:05:49.721 SO libspdk_ftl.so.9.0 00:05:49.982 SYMLINK libspdk_ftl.so 00:05:50.553 LIB libspdk_nvmf.a 00:05:50.553 LIB libspdk_vhost.a 00:05:50.553 SO libspdk_nvmf.so.20.0 00:05:50.553 SO libspdk_vhost.so.8.0 00:05:50.553 SYMLINK libspdk_vhost.so 00:05:50.553 SYMLINK libspdk_nvmf.so 00:05:50.814 LIB libspdk_iscsi.a 00:05:50.814 SO libspdk_iscsi.so.8.0 00:05:51.076 SYMLINK libspdk_iscsi.so 00:05:51.648 CC module/env_dpdk/env_dpdk_rpc.o 00:05:51.648 CC module/vfu_device/vfu_virtio.o 00:05:51.648 CC module/vfu_device/vfu_virtio_blk.o 00:05:51.648 CC module/vfu_device/vfu_virtio_scsi.o 00:05:51.648 CC module/vfu_device/vfu_virtio_rpc.o 00:05:51.648 CC module/vfu_device/vfu_virtio_fs.o 00:05:51.648 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:51.648 CC module/sock/posix/posix.o 00:05:51.648 LIB libspdk_env_dpdk_rpc.a 00:05:51.648 CC module/accel/error/accel_error.o 00:05:51.648 CC module/accel/error/accel_error_rpc.o 00:05:51.648 CC module/blob/bdev/blob_bdev.o 00:05:51.648 CC module/accel/ioat/accel_ioat.o 00:05:51.648 CC module/accel/ioat/accel_ioat_rpc.o 00:05:51.648 CC module/accel/dsa/accel_dsa.o 00:05:51.648 CC module/accel/dsa/accel_dsa_rpc.o 00:05:51.648 CC module/keyring/file/keyring.o 00:05:51.648 CC module/scheduler/gscheduler/gscheduler.o 00:05:51.648 CC module/keyring/file/keyring_rpc.o 00:05:51.648 CC module/keyring/linux/keyring.o 00:05:51.648 CC module/keyring/linux/keyring_rpc.o 00:05:51.648 CC module/accel/iaa/accel_iaa.o 00:05:51.648 CC module/accel/iaa/accel_iaa_rpc.o 00:05:51.648 CC module/fsdev/aio/fsdev_aio.o 00:05:51.648 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:51.648 CC module/fsdev/aio/linux_aio_mgr.o 00:05:51.648 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:05:51.648 SO libspdk_env_dpdk_rpc.so.6.0 00:05:51.910 SYMLINK libspdk_env_dpdk_rpc.so 00:05:51.910 LIB libspdk_scheduler_gscheduler.a 00:05:51.910 LIB libspdk_keyring_linux.a 00:05:51.910 LIB libspdk_accel_error.a 00:05:51.910 LIB libspdk_scheduler_dynamic.a 00:05:51.910 LIB libspdk_scheduler_dpdk_governor.a 00:05:51.910 LIB libspdk_keyring_file.a 00:05:51.910 LIB libspdk_accel_ioat.a 00:05:51.910 SO libspdk_accel_error.so.2.0 00:05:51.910 SO libspdk_scheduler_gscheduler.so.4.0 00:05:51.910 SO libspdk_keyring_linux.so.1.0 00:05:51.910 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:51.910 SO libspdk_scheduler_dynamic.so.4.0 00:05:51.910 SO libspdk_keyring_file.so.2.0 00:05:51.910 SO libspdk_accel_ioat.so.6.0 00:05:51.910 LIB libspdk_accel_iaa.a 00:05:51.910 SYMLINK libspdk_accel_error.so 00:05:51.910 SYMLINK libspdk_scheduler_gscheduler.so 00:05:51.910 SYMLINK libspdk_keyring_linux.so 00:05:51.910 LIB libspdk_blob_bdev.a 00:05:51.910 SO libspdk_accel_iaa.so.3.0 00:05:51.910 SYMLINK libspdk_keyring_file.so 00:05:51.910 SYMLINK libspdk_scheduler_dynamic.so 00:05:51.910 LIB libspdk_accel_dsa.a 00:05:51.910 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:51.910 SYMLINK libspdk_accel_ioat.so 00:05:51.910 SO libspdk_blob_bdev.so.11.0 00:05:51.910 SO libspdk_accel_dsa.so.5.0 00:05:52.170 SYMLINK libspdk_accel_iaa.so 00:05:52.170 SYMLINK libspdk_blob_bdev.so 00:05:52.170 LIB libspdk_vfu_device.a 00:05:52.170 SYMLINK libspdk_accel_dsa.so 00:05:52.170 SO libspdk_vfu_device.so.3.0 00:05:52.170 SYMLINK libspdk_vfu_device.so 00:05:52.432 LIB libspdk_fsdev_aio.a 00:05:52.432 LIB libspdk_sock_posix.a 00:05:52.432 SO libspdk_fsdev_aio.so.1.0 00:05:52.432 SO libspdk_sock_posix.so.6.0 00:05:52.432 SYMLINK libspdk_fsdev_aio.so 00:05:52.432 SYMLINK libspdk_sock_posix.so 00:05:52.690 CC module/bdev/delay/vbdev_delay.o 00:05:52.690 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:52.690 CC module/bdev/gpt/gpt.o 00:05:52.690 CC module/bdev/gpt/vbdev_gpt.o 00:05:52.690 CC module/bdev/null/bdev_null.o 00:05:52.690 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:52.690 CC module/bdev/null/bdev_null_rpc.o 00:05:52.690 CC module/bdev/malloc/bdev_malloc.o 00:05:52.690 CC module/bdev/lvol/vbdev_lvol.o 00:05:52.690 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:52.690 CC module/bdev/nvme/bdev_nvme.o 00:05:52.690 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:52.690 CC module/bdev/nvme/bdev_mdns_client.o 00:05:52.690 CC module/bdev/nvme/nvme_rpc.o 00:05:52.690 CC module/bdev/error/vbdev_error.o 00:05:52.690 CC module/bdev/raid/bdev_raid.o 00:05:52.690 CC module/bdev/nvme/vbdev_opal.o 00:05:52.690 CC module/bdev/raid/bdev_raid_rpc.o 00:05:52.690 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:52.690 CC module/bdev/error/vbdev_error_rpc.o 00:05:52.690 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:52.690 CC module/bdev/raid/bdev_raid_sb.o 00:05:52.690 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:52.690 CC module/bdev/raid/raid0.o 00:05:52.690 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:52.690 CC module/bdev/split/vbdev_split.o 00:05:52.690 CC module/bdev/raid/raid1.o 00:05:52.690 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:52.690 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:52.690 CC module/bdev/split/vbdev_split_rpc.o 00:05:52.690 CC module/bdev/raid/concat.o 00:05:52.690 CC module/bdev/iscsi/bdev_iscsi.o 00:05:52.690 CC module/blobfs/bdev/blobfs_bdev.o 00:05:52.690 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:52.690 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:52.690 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:52.690 CC module/bdev/aio/bdev_aio.o 00:05:52.690 CC module/bdev/aio/bdev_aio_rpc.o 00:05:52.690 CC module/bdev/passthru/vbdev_passthru.o 00:05:52.690 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:52.690 CC module/bdev/ftl/bdev_ftl.o 00:05:52.690 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:52.949 LIB libspdk_bdev_null.a 00:05:52.949 SO libspdk_bdev_null.so.6.0 00:05:52.949 LIB libspdk_blobfs_bdev.a 00:05:52.950 LIB libspdk_bdev_split.a 00:05:52.950 SO libspdk_blobfs_bdev.so.6.0 00:05:52.950 LIB libspdk_bdev_gpt.a 00:05:52.950 SO libspdk_bdev_split.so.6.0 00:05:52.950 SYMLINK libspdk_bdev_null.so 00:05:52.950 LIB libspdk_bdev_error.a 00:05:52.950 SYMLINK libspdk_blobfs_bdev.so 00:05:52.950 SO libspdk_bdev_gpt.so.6.0 00:05:52.950 LIB libspdk_bdev_ftl.a 00:05:52.950 LIB libspdk_bdev_passthru.a 00:05:52.950 SO libspdk_bdev_error.so.6.0 00:05:52.950 LIB libspdk_bdev_zone_block.a 00:05:52.950 SYMLINK libspdk_bdev_split.so 00:05:52.950 LIB libspdk_bdev_delay.a 00:05:52.950 LIB libspdk_bdev_malloc.a 00:05:52.950 LIB libspdk_bdev_aio.a 00:05:52.950 SO libspdk_bdev_ftl.so.6.0 00:05:52.950 SO libspdk_bdev_passthru.so.6.0 00:05:52.950 SO libspdk_bdev_zone_block.so.6.0 00:05:52.950 SYMLINK libspdk_bdev_gpt.so 00:05:52.950 LIB libspdk_bdev_iscsi.a 00:05:53.210 SO libspdk_bdev_delay.so.6.0 00:05:53.210 SO libspdk_bdev_malloc.so.6.0 00:05:53.210 SYMLINK libspdk_bdev_error.so 00:05:53.210 SO libspdk_bdev_aio.so.6.0 00:05:53.210 SO libspdk_bdev_iscsi.so.6.0 00:05:53.210 SYMLINK libspdk_bdev_ftl.so 00:05:53.210 SYMLINK libspdk_bdev_zone_block.so 00:05:53.210 SYMLINK libspdk_bdev_passthru.so 00:05:53.210 SYMLINK libspdk_bdev_delay.so 00:05:53.210 SYMLINK libspdk_bdev_malloc.so 00:05:53.210 LIB libspdk_bdev_lvol.a 00:05:53.210 SYMLINK libspdk_bdev_aio.so 00:05:53.210 SYMLINK libspdk_bdev_iscsi.so 00:05:53.210 LIB libspdk_bdev_virtio.a 00:05:53.210 SO libspdk_bdev_lvol.so.6.0 00:05:53.210 SO libspdk_bdev_virtio.so.6.0 00:05:53.210 SYMLINK libspdk_bdev_lvol.so 00:05:53.210 SYMLINK libspdk_bdev_virtio.so 00:05:53.783 LIB libspdk_bdev_raid.a 00:05:53.783 SO libspdk_bdev_raid.so.6.0 00:05:53.783 SYMLINK libspdk_bdev_raid.so 00:05:54.726 LIB libspdk_bdev_nvme.a 00:05:54.986 SO libspdk_bdev_nvme.so.7.1 00:05:54.987 SYMLINK libspdk_bdev_nvme.so 00:05:55.928 CC module/event/subsystems/scheduler/scheduler.o 00:05:55.928 CC module/event/subsystems/fsdev/fsdev.o 00:05:55.928 CC module/event/subsystems/iobuf/iobuf.o 00:05:55.928 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:55.928 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:55.928 CC module/event/subsystems/sock/sock.o 00:05:55.928 CC module/event/subsystems/vmd/vmd.o 00:05:55.928 CC module/event/subsystems/keyring/keyring.o 00:05:55.928 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:55.928 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:55.928 LIB libspdk_event_scheduler.a 00:05:55.928 LIB libspdk_event_vhost_blk.a 00:05:55.928 LIB libspdk_event_fsdev.a 00:05:55.928 LIB libspdk_event_vmd.a 00:05:55.928 LIB libspdk_event_keyring.a 00:05:55.928 LIB libspdk_event_sock.a 00:05:55.928 LIB libspdk_event_iobuf.a 00:05:55.928 LIB libspdk_event_vfu_tgt.a 00:05:55.928 SO libspdk_event_scheduler.so.4.0 00:05:55.928 SO libspdk_event_vhost_blk.so.3.0 00:05:55.928 SO libspdk_event_fsdev.so.1.0 00:05:55.928 SO libspdk_event_vmd.so.6.0 00:05:55.928 SO libspdk_event_vfu_tgt.so.3.0 00:05:55.928 SO libspdk_event_keyring.so.1.0 00:05:55.928 SO libspdk_event_sock.so.5.0 00:05:55.928 SO libspdk_event_iobuf.so.3.0 00:05:55.928 
SYMLINK libspdk_event_scheduler.so 00:05:55.928 SYMLINK libspdk_event_fsdev.so 00:05:55.928 SYMLINK libspdk_event_vhost_blk.so 00:05:55.928 SYMLINK libspdk_event_vmd.so 00:05:55.928 SYMLINK libspdk_event_keyring.so 00:05:55.928 SYMLINK libspdk_event_vfu_tgt.so 00:05:55.928 SYMLINK libspdk_event_sock.so 00:05:55.928 SYMLINK libspdk_event_iobuf.so 00:05:56.501 CC module/event/subsystems/accel/accel.o 00:05:56.501 LIB libspdk_event_accel.a 00:05:56.501 SO libspdk_event_accel.so.6.0 00:05:56.761 SYMLINK libspdk_event_accel.so 00:05:57.022 CC module/event/subsystems/bdev/bdev.o 00:05:57.282 LIB libspdk_event_bdev.a 00:05:57.282 SO libspdk_event_bdev.so.6.0 00:05:57.282 SYMLINK libspdk_event_bdev.so 00:05:57.543 CC module/event/subsystems/nbd/nbd.o 00:05:57.543 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:57.543 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:57.543 CC module/event/subsystems/scsi/scsi.o 00:05:57.543 CC module/event/subsystems/ublk/ublk.o 00:05:57.804 LIB libspdk_event_nbd.a 00:05:57.804 LIB libspdk_event_ublk.a 00:05:57.804 SO libspdk_event_nbd.so.6.0 00:05:57.804 LIB libspdk_event_scsi.a 00:05:57.804 SO libspdk_event_ublk.so.3.0 00:05:57.804 SO libspdk_event_scsi.so.6.0 00:05:57.804 LIB libspdk_event_nvmf.a 00:05:57.804 SYMLINK libspdk_event_nbd.so 00:05:57.804 SO libspdk_event_nvmf.so.6.0 00:05:57.804 SYMLINK libspdk_event_ublk.so 00:05:57.804 SYMLINK libspdk_event_scsi.so 00:05:58.065 SYMLINK libspdk_event_nvmf.so 00:05:58.326 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:58.326 CC module/event/subsystems/iscsi/iscsi.o 00:05:58.326 LIB libspdk_event_vhost_scsi.a 00:05:58.586 LIB libspdk_event_iscsi.a 00:05:58.586 SO libspdk_event_vhost_scsi.so.3.0 00:05:58.586 SO libspdk_event_iscsi.so.6.0 00:05:58.586 SYMLINK libspdk_event_vhost_scsi.so 00:05:58.586 SYMLINK libspdk_event_iscsi.so 00:05:58.847 SO libspdk.so.6.0 00:05:58.847 SYMLINK libspdk.so 00:05:59.108 CC test/rpc_client/rpc_client_test.o 00:05:59.108 TEST_HEADER include/spdk/accel_module.h 00:05:59.108 CC app/spdk_top/spdk_top.o 00:05:59.108 TEST_HEADER include/spdk/accel.h 00:05:59.108 TEST_HEADER include/spdk/assert.h 00:05:59.108 TEST_HEADER include/spdk/barrier.h 00:05:59.108 TEST_HEADER include/spdk/base64.h 00:05:59.108 TEST_HEADER include/spdk/bdev.h 00:05:59.108 TEST_HEADER include/spdk/bdev_module.h 00:05:59.108 TEST_HEADER include/spdk/bdev_zone.h 00:05:59.108 TEST_HEADER include/spdk/bit_array.h 00:05:59.108 TEST_HEADER include/spdk/bit_pool.h 00:05:59.108 TEST_HEADER include/spdk/blob_bdev.h 00:05:59.108 CC app/spdk_nvme_discover/discovery_aer.o 00:05:59.108 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:59.108 TEST_HEADER include/spdk/blobfs.h 00:05:59.108 CC app/trace_record/trace_record.o 00:05:59.108 TEST_HEADER include/spdk/blob.h 00:05:59.108 TEST_HEADER include/spdk/conf.h 00:05:59.108 TEST_HEADER include/spdk/config.h 00:05:59.108 TEST_HEADER include/spdk/cpuset.h 00:05:59.108 CXX app/trace/trace.o 00:05:59.108 TEST_HEADER include/spdk/crc16.h 00:05:59.108 TEST_HEADER include/spdk/crc64.h 00:05:59.108 CC app/spdk_lspci/spdk_lspci.o 00:05:59.108 TEST_HEADER include/spdk/crc32.h 00:05:59.108 TEST_HEADER include/spdk/dif.h 00:05:59.108 CC app/spdk_nvme_identify/identify.o 00:05:59.108 TEST_HEADER include/spdk/endian.h 00:05:59.108 TEST_HEADER include/spdk/dma.h 00:05:59.108 CC app/spdk_nvme_perf/perf.o 00:05:59.108 TEST_HEADER include/spdk/env_dpdk.h 00:05:59.108 TEST_HEADER include/spdk/env.h 00:05:59.108 TEST_HEADER include/spdk/fd_group.h 00:05:59.108 TEST_HEADER include/spdk/event.h 
00:05:59.108 TEST_HEADER include/spdk/fd.h 00:05:59.108 TEST_HEADER include/spdk/file.h 00:05:59.108 TEST_HEADER include/spdk/fsdev.h 00:05:59.108 TEST_HEADER include/spdk/fsdev_module.h 00:05:59.108 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:59.108 TEST_HEADER include/spdk/ftl.h 00:05:59.108 TEST_HEADER include/spdk/hexlify.h 00:05:59.108 TEST_HEADER include/spdk/idxd.h 00:05:59.108 TEST_HEADER include/spdk/gpt_spec.h 00:05:59.108 TEST_HEADER include/spdk/histogram_data.h 00:05:59.108 TEST_HEADER include/spdk/idxd_spec.h 00:05:59.108 TEST_HEADER include/spdk/init.h 00:05:59.108 TEST_HEADER include/spdk/ioat.h 00:05:59.108 TEST_HEADER include/spdk/iscsi_spec.h 00:05:59.108 TEST_HEADER include/spdk/ioat_spec.h 00:05:59.108 TEST_HEADER include/spdk/jsonrpc.h 00:05:59.108 TEST_HEADER include/spdk/json.h 00:05:59.108 TEST_HEADER include/spdk/keyring.h 00:05:59.108 TEST_HEADER include/spdk/keyring_module.h 00:05:59.108 TEST_HEADER include/spdk/lvol.h 00:05:59.108 TEST_HEADER include/spdk/likely.h 00:05:59.108 TEST_HEADER include/spdk/log.h 00:05:59.108 TEST_HEADER include/spdk/md5.h 00:05:59.108 TEST_HEADER include/spdk/memory.h 00:05:59.108 TEST_HEADER include/spdk/mmio.h 00:05:59.108 CC app/nvmf_tgt/nvmf_main.o 00:05:59.108 TEST_HEADER include/spdk/net.h 00:05:59.108 TEST_HEADER include/spdk/nbd.h 00:05:59.108 TEST_HEADER include/spdk/nvme.h 00:05:59.108 TEST_HEADER include/spdk/notify.h 00:05:59.108 TEST_HEADER include/spdk/nvme_intel.h 00:05:59.108 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:59.108 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:59.108 TEST_HEADER include/spdk/nvme_spec.h 00:05:59.108 CC app/iscsi_tgt/iscsi_tgt.o 00:05:59.108 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:59.108 TEST_HEADER include/spdk/nvme_zns.h 00:05:59.108 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:59.108 TEST_HEADER include/spdk/nvmf.h 00:05:59.108 TEST_HEADER include/spdk/nvmf_transport.h 00:05:59.108 TEST_HEADER include/spdk/nvmf_spec.h 00:05:59.108 TEST_HEADER include/spdk/opal.h 00:05:59.108 CC app/spdk_dd/spdk_dd.o 00:05:59.108 TEST_HEADER include/spdk/opal_spec.h 00:05:59.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:59.108 TEST_HEADER include/spdk/pci_ids.h 00:05:59.108 TEST_HEADER include/spdk/pipe.h 00:05:59.108 TEST_HEADER include/spdk/queue.h 00:05:59.108 TEST_HEADER include/spdk/rpc.h 00:05:59.108 TEST_HEADER include/spdk/reduce.h 00:05:59.108 TEST_HEADER include/spdk/scheduler.h 00:05:59.108 TEST_HEADER include/spdk/scsi.h 00:05:59.108 TEST_HEADER include/spdk/scsi_spec.h 00:05:59.108 TEST_HEADER include/spdk/sock.h 00:05:59.108 TEST_HEADER include/spdk/stdinc.h 00:05:59.108 TEST_HEADER include/spdk/string.h 00:05:59.108 TEST_HEADER include/spdk/trace.h 00:05:59.108 TEST_HEADER include/spdk/thread.h 00:05:59.108 TEST_HEADER include/spdk/trace_parser.h 00:05:59.108 TEST_HEADER include/spdk/tree.h 00:05:59.108 TEST_HEADER include/spdk/util.h 00:05:59.108 TEST_HEADER include/spdk/ublk.h 00:05:59.370 TEST_HEADER include/spdk/uuid.h 00:05:59.370 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:59.370 TEST_HEADER include/spdk/version.h 00:05:59.370 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:59.370 TEST_HEADER include/spdk/vhost.h 00:05:59.370 TEST_HEADER include/spdk/vmd.h 00:05:59.370 TEST_HEADER include/spdk/zipf.h 00:05:59.370 TEST_HEADER include/spdk/xor.h 00:05:59.370 CXX test/cpp_headers/accel.o 00:05:59.370 CC app/spdk_tgt/spdk_tgt.o 00:05:59.370 CXX test/cpp_headers/assert.o 00:05:59.370 CXX test/cpp_headers/accel_module.o 00:05:59.370 CXX test/cpp_headers/barrier.o 
00:05:59.370 CXX test/cpp_headers/base64.o 00:05:59.370 CXX test/cpp_headers/bdev.o 00:05:59.370 CXX test/cpp_headers/bdev_module.o 00:05:59.370 CXX test/cpp_headers/bdev_zone.o 00:05:59.370 CXX test/cpp_headers/bit_array.o 00:05:59.370 CXX test/cpp_headers/bit_pool.o 00:05:59.370 CXX test/cpp_headers/blobfs_bdev.o 00:05:59.370 CXX test/cpp_headers/blob_bdev.o 00:05:59.370 CXX test/cpp_headers/blobfs.o 00:05:59.370 CXX test/cpp_headers/blob.o 00:05:59.370 CXX test/cpp_headers/conf.o 00:05:59.370 CXX test/cpp_headers/config.o 00:05:59.370 CXX test/cpp_headers/cpuset.o 00:05:59.370 CXX test/cpp_headers/crc16.o 00:05:59.370 CXX test/cpp_headers/crc32.o 00:05:59.370 CXX test/cpp_headers/crc64.o 00:05:59.370 CXX test/cpp_headers/dma.o 00:05:59.370 CXX test/cpp_headers/dif.o 00:05:59.370 CXX test/cpp_headers/event.o 00:05:59.370 CXX test/cpp_headers/endian.o 00:05:59.370 CXX test/cpp_headers/env_dpdk.o 00:05:59.370 CXX test/cpp_headers/env.o 00:05:59.370 CXX test/cpp_headers/fd_group.o 00:05:59.370 CXX test/cpp_headers/file.o 00:05:59.370 CXX test/cpp_headers/fsdev.o 00:05:59.370 CXX test/cpp_headers/fsdev_module.o 00:05:59.370 CXX test/cpp_headers/fd.o 00:05:59.370 CXX test/cpp_headers/fuse_dispatcher.o 00:05:59.370 CXX test/cpp_headers/hexlify.o 00:05:59.370 CXX test/cpp_headers/ftl.o 00:05:59.370 CXX test/cpp_headers/gpt_spec.o 00:05:59.370 CXX test/cpp_headers/idxd.o 00:05:59.370 CXX test/cpp_headers/histogram_data.o 00:05:59.370 CXX test/cpp_headers/idxd_spec.o 00:05:59.370 CXX test/cpp_headers/ioat_spec.o 00:05:59.370 CXX test/cpp_headers/init.o 00:05:59.370 CXX test/cpp_headers/ioat.o 00:05:59.370 CXX test/cpp_headers/iscsi_spec.o 00:05:59.370 CXX test/cpp_headers/json.o 00:05:59.370 CXX test/cpp_headers/keyring.o 00:05:59.370 CXX test/cpp_headers/jsonrpc.o 00:05:59.370 CXX test/cpp_headers/keyring_module.o 00:05:59.370 CXX test/cpp_headers/likely.o 00:05:59.370 CXX test/cpp_headers/lvol.o 00:05:59.370 CXX test/cpp_headers/log.o 00:05:59.370 CXX test/cpp_headers/md5.o 00:05:59.370 CXX test/cpp_headers/net.o 00:05:59.370 CXX test/cpp_headers/nbd.o 00:05:59.370 CXX test/cpp_headers/memory.o 00:05:59.370 CXX test/cpp_headers/notify.o 00:05:59.370 CXX test/cpp_headers/mmio.o 00:05:59.370 CXX test/cpp_headers/nvme_intel.o 00:05:59.370 CXX test/cpp_headers/nvme_spec.o 00:05:59.370 CXX test/cpp_headers/nvme_zns.o 00:05:59.370 CXX test/cpp_headers/nvme_ocssd.o 00:05:59.370 CXX test/cpp_headers/nvme.o 00:05:59.370 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:59.370 CXX test/cpp_headers/nvmf_cmd.o 00:05:59.370 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:59.370 CXX test/cpp_headers/nvmf_spec.o 00:05:59.370 CXX test/cpp_headers/opal.o 00:05:59.370 CXX test/cpp_headers/nvmf.o 00:05:59.370 CXX test/cpp_headers/pci_ids.o 00:05:59.370 CXX test/cpp_headers/queue.o 00:05:59.370 CXX test/cpp_headers/nvmf_transport.o 00:05:59.370 CXX test/cpp_headers/opal_spec.o 00:05:59.370 CXX test/cpp_headers/pipe.o 00:05:59.370 CXX test/cpp_headers/reduce.o 00:05:59.370 CXX test/cpp_headers/scsi.o 00:05:59.370 CXX test/cpp_headers/rpc.o 00:05:59.370 CXX test/cpp_headers/scheduler.o 00:05:59.370 CXX test/cpp_headers/scsi_spec.o 00:05:59.370 CXX test/cpp_headers/stdinc.o 00:05:59.370 CXX test/cpp_headers/sock.o 00:05:59.370 CXX test/cpp_headers/string.o 00:05:59.370 CXX test/cpp_headers/thread.o 00:05:59.370 CXX test/cpp_headers/trace.o 00:05:59.370 CXX test/cpp_headers/trace_parser.o 00:05:59.370 CXX test/cpp_headers/tree.o 00:05:59.370 CC test/dma/test_dma/test_dma.o 00:05:59.370 CXX test/cpp_headers/ublk.o 00:05:59.370 
CXX test/cpp_headers/util.o 00:05:59.370 CC test/app/histogram_perf/histogram_perf.o 00:05:59.370 CXX test/cpp_headers/uuid.o 00:05:59.370 CXX test/cpp_headers/vfio_user_spec.o 00:05:59.370 CXX test/cpp_headers/version.o 00:05:59.370 CXX test/cpp_headers/vhost.o 00:05:59.370 CXX test/cpp_headers/vfio_user_pci.o 00:05:59.370 CC test/env/pci/pci_ut.o 00:05:59.370 CXX test/cpp_headers/vmd.o 00:05:59.370 CC test/thread/poller_perf/poller_perf.o 00:05:59.370 CXX test/cpp_headers/zipf.o 00:05:59.370 CXX test/cpp_headers/xor.o 00:05:59.370 CC test/env/memory/memory_ut.o 00:05:59.370 CC test/env/vtophys/vtophys.o 00:05:59.370 CC test/app/jsoncat/jsoncat.o 00:05:59.370 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:59.370 CC test/app/stub/stub.o 00:05:59.370 CC examples/ioat/verify/verify.o 00:05:59.370 CC app/fio/nvme/fio_plugin.o 00:05:59.370 CC test/app/bdev_svc/bdev_svc.o 00:05:59.370 CC examples/ioat/perf/perf.o 00:05:59.370 CC examples/util/zipf/zipf.o 00:05:59.370 LINK spdk_lspci 00:05:59.633 LINK rpc_client_test 00:05:59.633 CC app/fio/bdev/fio_plugin.o 00:05:59.633 LINK spdk_nvme_discover 00:05:59.633 LINK nvmf_tgt 00:05:59.633 LINK spdk_trace_record 00:05:59.633 LINK interrupt_tgt 00:05:59.892 CC test/env/mem_callbacks/mem_callbacks.o 00:05:59.892 LINK iscsi_tgt 00:05:59.892 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:59.892 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:59.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:59.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:59.892 LINK jsoncat 00:05:59.892 LINK poller_perf 00:05:59.892 LINK env_dpdk_post_init 00:05:59.892 LINK zipf 00:05:59.892 LINK spdk_tgt 00:05:59.892 LINK verify 00:05:59.892 LINK histogram_perf 00:06:00.152 LINK vtophys 00:06:00.152 LINK spdk_dd 00:06:00.152 LINK ioat_perf 00:06:00.152 LINK bdev_svc 00:06:00.152 LINK stub 00:06:00.152 LINK spdk_top 00:06:00.152 LINK pci_ut 00:06:00.152 LINK spdk_trace 00:06:00.414 LINK spdk_nvme_perf 00:06:00.414 LINK spdk_nvme_identify 00:06:00.414 CC examples/sock/hello_world/hello_sock.o 00:06:00.414 LINK nvme_fuzz 00:06:00.414 CC examples/vmd/lsvmd/lsvmd.o 00:06:00.414 LINK spdk_nvme 00:06:00.414 CC examples/vmd/led/led.o 00:06:00.414 CC examples/idxd/perf/perf.o 00:06:00.414 LINK vhost_fuzz 00:06:00.414 CC examples/thread/thread/thread_ex.o 00:06:00.414 LINK spdk_bdev 00:06:00.414 CC test/event/event_perf/event_perf.o 00:06:00.414 LINK test_dma 00:06:00.414 CC test/event/reactor_perf/reactor_perf.o 00:06:00.414 CC test/event/reactor/reactor.o 00:06:00.414 CC test/event/app_repeat/app_repeat.o 00:06:00.414 CC test/event/scheduler/scheduler.o 00:06:00.414 LINK mem_callbacks 00:06:00.674 LINK lsvmd 00:06:00.674 CC app/vhost/vhost.o 00:06:00.674 LINK led 00:06:00.674 LINK event_perf 00:06:00.674 LINK reactor 00:06:00.674 LINK reactor_perf 00:06:00.674 LINK hello_sock 00:06:00.674 LINK app_repeat 00:06:00.674 LINK thread 00:06:00.674 LINK idxd_perf 00:06:00.674 LINK scheduler 00:06:00.674 LINK vhost 00:06:00.936 LINK memory_ut 00:06:00.936 CC test/nvme/reset/reset.o 00:06:00.936 CC test/nvme/sgl/sgl.o 00:06:00.936 CC test/nvme/overhead/overhead.o 00:06:00.936 CC test/nvme/reserve/reserve.o 00:06:00.936 CC test/accel/dif/dif.o 00:06:00.936 CC test/nvme/connect_stress/connect_stress.o 00:06:00.936 CC test/nvme/e2edp/nvme_dp.o 00:06:00.936 CC test/nvme/simple_copy/simple_copy.o 00:06:00.936 CC test/nvme/fused_ordering/fused_ordering.o 00:06:00.936 CC test/nvme/err_injection/err_injection.o 00:06:00.936 CC test/nvme/fdp/fdp.o 00:06:00.936 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:06:00.936 CC test/nvme/aer/aer.o 00:06:00.936 CC test/nvme/boot_partition/boot_partition.o 00:06:00.936 CC test/nvme/cuse/cuse.o 00:06:00.936 CC test/nvme/startup/startup.o 00:06:00.936 CC test/nvme/compliance/nvme_compliance.o 00:06:01.196 CC test/blobfs/mkfs/mkfs.o 00:06:01.196 CC examples/nvme/hotplug/hotplug.o 00:06:01.196 CC examples/nvme/hello_world/hello_world.o 00:06:01.196 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:01.196 CC examples/nvme/reconnect/reconnect.o 00:06:01.196 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:01.196 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:01.196 CC examples/nvme/abort/abort.o 00:06:01.196 CC examples/nvme/arbitration/arbitration.o 00:06:01.196 CC test/lvol/esnap/esnap.o 00:06:01.196 LINK boot_partition 00:06:01.196 LINK fused_ordering 00:06:01.196 LINK connect_stress 00:06:01.196 CC examples/accel/perf/accel_perf.o 00:06:01.196 LINK reserve 00:06:01.196 LINK startup 00:06:01.196 LINK err_injection 00:06:01.196 LINK doorbell_aers 00:06:01.196 LINK mkfs 00:06:01.196 LINK reset 00:06:01.455 CC examples/blob/cli/blobcli.o 00:06:01.455 LINK simple_copy 00:06:01.455 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:01.455 CC examples/blob/hello_world/hello_blob.o 00:06:01.455 LINK sgl 00:06:01.455 LINK nvme_dp 00:06:01.455 LINK overhead 00:06:01.455 LINK aer 00:06:01.455 LINK nvme_compliance 00:06:01.455 LINK pmr_persistence 00:06:01.455 LINK iscsi_fuzz 00:06:01.455 LINK fdp 00:06:01.455 LINK hotplug 00:06:01.455 LINK cmb_copy 00:06:01.455 LINK hello_world 00:06:01.455 LINK reconnect 00:06:01.455 LINK arbitration 00:06:01.455 LINK abort 00:06:01.717 LINK hello_blob 00:06:01.717 LINK nvme_manage 00:06:01.717 LINK hello_fsdev 00:06:01.717 LINK dif 00:06:01.717 LINK accel_perf 00:06:01.717 LINK blobcli 00:06:02.291 LINK cuse 00:06:02.291 CC test/bdev/bdevio/bdevio.o 00:06:02.291 CC examples/bdev/hello_world/hello_bdev.o 00:06:02.291 CC examples/bdev/bdevperf/bdevperf.o 00:06:02.552 LINK hello_bdev 00:06:02.552 LINK bdevio 00:06:03.124 LINK bdevperf 00:06:03.695 CC examples/nvmf/nvmf/nvmf.o 00:06:03.956 LINK nvmf 00:06:05.339 LINK esnap 00:06:05.911 00:06:05.911 real 0m53.648s 00:06:05.911 user 7m46.251s 00:06:05.911 sys 4m26.351s 00:06:05.911 09:58:09 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:05.911 09:58:09 make -- common/autotest_common.sh@10 -- $ set +x 00:06:05.911 ************************************ 00:06:05.911 END TEST make 00:06:05.911 ************************************ 00:06:05.911 09:58:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:05.911 09:58:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:05.911 09:58:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:05.911 09:58:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.911 09:58:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:05.911 09:58:09 -- pm/common@44 -- $ pid=3549461 00:06:05.911 09:58:09 -- pm/common@50 -- $ kill -TERM 3549461 00:06:05.911 09:58:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.911 09:58:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:05.911 09:58:09 -- pm/common@44 -- $ pid=3549462 00:06:05.911 09:58:09 -- pm/common@50 -- $ kill -TERM 3549462 00:06:05.911 09:58:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.911 09:58:09 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:05.911 09:58:09 -- pm/common@44 -- $ pid=3549464 00:06:05.911 09:58:09 -- pm/common@50 -- $ kill -TERM 3549464 00:06:05.911 09:58:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:05.911 09:58:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:05.911 09:58:09 -- pm/common@44 -- $ pid=3549487 00:06:05.911 09:58:09 -- pm/common@50 -- $ sudo -E kill -TERM 3549487 00:06:05.911 09:58:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:05.911 09:58:09 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:05.911 09:58:09 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.911 09:58:09 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.911 09:58:09 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.911 09:58:09 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.911 09:58:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.911 09:58:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.911 09:58:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.911 09:58:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.911 09:58:09 -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.911 09:58:09 -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.911 09:58:09 -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.911 09:58:09 -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.911 09:58:09 -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.911 09:58:09 -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.911 09:58:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.911 09:58:09 -- scripts/common.sh@344 -- # case "$op" in 00:06:05.911 09:58:09 -- scripts/common.sh@345 -- # : 1 00:06:05.911 09:58:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.911 09:58:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.911 09:58:09 -- scripts/common.sh@365 -- # decimal 1 00:06:05.911 09:58:09 -- scripts/common.sh@353 -- # local d=1 00:06:05.911 09:58:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.911 09:58:09 -- scripts/common.sh@355 -- # echo 1 00:06:05.911 09:58:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.911 09:58:09 -- scripts/common.sh@366 -- # decimal 2 00:06:05.911 09:58:09 -- scripts/common.sh@353 -- # local d=2 00:06:05.911 09:58:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.911 09:58:09 -- scripts/common.sh@355 -- # echo 2 00:06:05.911 09:58:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.911 09:58:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.911 09:58:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.911 09:58:09 -- scripts/common.sh@368 -- # return 0 00:06:05.911 09:58:09 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.911 09:58:09 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.912 --rc genhtml_branch_coverage=1 00:06:05.912 --rc genhtml_function_coverage=1 00:06:05.912 --rc genhtml_legend=1 00:06:05.912 --rc geninfo_all_blocks=1 00:06:05.912 --rc geninfo_unexecuted_blocks=1 00:06:05.912 00:06:05.912 ' 00:06:05.912 09:58:09 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.912 --rc genhtml_branch_coverage=1 00:06:05.912 --rc genhtml_function_coverage=1 00:06:05.912 --rc genhtml_legend=1 00:06:05.912 --rc geninfo_all_blocks=1 00:06:05.912 --rc geninfo_unexecuted_blocks=1 00:06:05.912 00:06:05.912 ' 00:06:05.912 09:58:09 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.912 --rc genhtml_branch_coverage=1 00:06:05.912 --rc genhtml_function_coverage=1 00:06:05.912 --rc genhtml_legend=1 00:06:05.912 --rc geninfo_all_blocks=1 00:06:05.912 --rc geninfo_unexecuted_blocks=1 00:06:05.912 00:06:05.912 ' 00:06:05.912 09:58:09 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.912 --rc genhtml_branch_coverage=1 00:06:05.912 --rc genhtml_function_coverage=1 00:06:05.912 --rc genhtml_legend=1 00:06:05.912 --rc geninfo_all_blocks=1 00:06:05.912 --rc geninfo_unexecuted_blocks=1 00:06:05.912 00:06:05.912 ' 00:06:05.912 09:58:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.912 09:58:09 -- nvmf/common.sh@7 -- # uname -s 00:06:05.912 09:58:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.912 09:58:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.912 09:58:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.912 09:58:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.912 09:58:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.912 09:58:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.912 09:58:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.912 09:58:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.912 09:58:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.912 09:58:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.912 09:58:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:05.912 09:58:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:05.912 09:58:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.912 09:58:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.912 09:58:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.912 09:58:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.912 09:58:09 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.912 09:58:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.912 09:58:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.912 09:58:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.912 09:58:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.912 09:58:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.912 09:58:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.912 09:58:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.912 09:58:09 -- paths/export.sh@5 -- # export PATH 00:06:05.912 09:58:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.173 09:58:09 -- nvmf/common.sh@51 -- # : 0 00:06:06.173 09:58:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.173 09:58:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.173 09:58:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.173 09:58:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.173 09:58:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.173 09:58:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.173 09:58:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.173 09:58:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.173 09:58:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.173 09:58:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:06.173 09:58:09 -- spdk/autotest.sh@32 -- # uname -s 00:06:06.173 09:58:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:06.173 09:58:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:06.173 09:58:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
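[Editor's sketch, not part of the captured log] The autotest.sh trace above saves the existing kernel core_pattern, creates a coredumps output directory, and (in the next entries) points core_pattern at SPDK's core-collector.sh. A minimal bash sketch of that core-dump redirection technique follows; the paths are placeholders, not the CI paths, and writing core_pattern requires root.

#!/usr/bin/env bash
# Sketch: route kernel core dumps to a custom collector for the duration of a test run.
set -e

output_dir=/tmp/ci-output/coredumps            # assumed output location
collector=/opt/ci/scripts/core-collector.sh    # assumed collector script

old_core_pattern=$(cat /proc/sys/kernel/core_pattern)   # remember the previous handler
mkdir -p "$output_dir"

# A leading '|' makes the kernel pipe the core image into the command;
# %P = PID, %s = signal number, %t = dump time (see core(5)).
echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern

# ... run the tests ...

# Restore the original handler afterwards.
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern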
00:06:06.173 09:58:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:06.173 09:58:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:06.173 09:58:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:06.173 09:58:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:06.173 09:58:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:06.173 09:58:09 -- spdk/autotest.sh@48 -- # udevadm_pid=3615217 00:06:06.173 09:58:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:06.173 09:58:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:06.173 09:58:09 -- pm/common@17 -- # local monitor 00:06:06.173 09:58:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.173 09:58:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.173 09:58:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.173 09:58:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.173 09:58:09 -- pm/common@21 -- # date +%s 00:06:06.173 09:58:09 -- pm/common@21 -- # date +%s 00:06:06.173 09:58:09 -- pm/common@25 -- # sleep 1 00:06:06.173 09:58:09 -- pm/common@21 -- # date +%s 00:06:06.173 09:58:09 -- pm/common@21 -- # date +%s 00:06:06.173 09:58:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730883489 00:06:06.173 09:58:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730883489 00:06:06.173 09:58:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730883489 00:06:06.173 09:58:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730883489 00:06:06.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730883489_collect-vmstat.pm.log 00:06:06.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730883489_collect-cpu-load.pm.log 00:06:06.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730883489_collect-cpu-temp.pm.log 00:06:06.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730883489_collect-bmc-pm.bmc.pm.log 00:06:07.114 09:58:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:07.114 09:58:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:07.114 09:58:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.114 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.114 09:58:10 -- spdk/autotest.sh@59 -- # create_test_list 00:06:07.114 09:58:10 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:07.114 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.114 09:58:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:07.114 09:58:10 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.114 09:58:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.114 09:58:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:07.114 09:58:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.114 09:58:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:07.114 09:58:10 -- common/autotest_common.sh@1455 -- # uname 00:06:07.114 09:58:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:07.114 09:58:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:07.114 09:58:10 -- common/autotest_common.sh@1475 -- # uname 00:06:07.114 09:58:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:07.114 09:58:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:07.114 09:58:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:07.114 lcov: LCOV version 1.15 00:06:07.114 09:58:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:29.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:29.332 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:37.536 09:58:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:37.536 09:58:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.536 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.536 09:58:40 -- spdk/autotest.sh@78 -- # rm -f 00:06:37.536 09:58:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:40.839 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:65:00.0 (144d a80a): Already using the nvme driver 00:06:40.839 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:06:40.839 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:06:41.101 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:06:41.101 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:06:41.101 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:06:41.101 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:06:41.101 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:06:41.362 09:58:44 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:06:41.362 09:58:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:41.362 09:58:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:41.362 09:58:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:41.362 09:58:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.362 09:58:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:41.362 09:58:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:41.362 09:58:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.362 09:58:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.362 09:58:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:41.362 09:58:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:41.362 09:58:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:41.363 09:58:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:41.363 09:58:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:41.363 09:58:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:41.363 No valid GPT data, bailing 00:06:41.363 09:58:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:41.363 09:58:44 -- scripts/common.sh@394 -- # pt= 00:06:41.363 09:58:44 -- scripts/common.sh@395 -- # return 1 00:06:41.363 09:58:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:41.363 1+0 records in 00:06:41.363 1+0 records out 00:06:41.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467907 s, 224 MB/s 00:06:41.363 09:58:44 -- spdk/autotest.sh@105 -- # sync 00:06:41.363 09:58:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:41.363 09:58:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:41.363 09:58:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:51.363 09:58:53 -- spdk/autotest.sh@111 -- # uname -s 00:06:51.363 09:58:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:51.363 09:58:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:51.363 09:58:53 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:53.279 Hugepages 00:06:53.279 node hugesize free / total 00:06:53.279 node0 1048576kB 0 / 0 00:06:53.279 node0 2048kB 0 / 0 00:06:53.279 node1 1048576kB 0 / 0 00:06:53.279 node1 2048kB 0 / 0 00:06:53.279 00:06:53.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:53.279 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:53.279 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:53.541 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:53.541 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:53.541 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:06:53.541 09:58:56 -- spdk/autotest.sh@117 -- # uname -s 00:06:53.541 09:58:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:53.541 09:58:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:53.541 09:58:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:57.748 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:57.748 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:59.662 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:59.662 09:59:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:00.617 09:59:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:00.617 09:59:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:00.617 09:59:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:00.617 09:59:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:00.617 09:59:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:00.617 09:59:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:00.617 09:59:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:00.617 09:59:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:00.617 09:59:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:00.617 09:59:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:00.617 09:59:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:07:00.617 09:59:04 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:04.826 Waiting for block devices as requested 00:07:04.826 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:04.826 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:04.826 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:04.826 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:04.826 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:04.826 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:05.087 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:05.087 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:07:05.087 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:07:05.348 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:05.348 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:05.608 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:05.608 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:05.608 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:05.608 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:05.869 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:05.869 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:07:06.130 09:59:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:06.130 09:59:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:07:06.130 09:59:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:06.130 09:59:09 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:07:06.130 09:59:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:06.130 09:59:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:07:06.130 09:59:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:06.131 09:59:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:06.131 09:59:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:06.131 09:59:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:06.131 09:59:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:06.131 09:59:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:06.131 09:59:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:06.131 09:59:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:07:06.131 09:59:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:06.131 09:59:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:06.131 09:59:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:06.131 09:59:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:06.131 09:59:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:06.131 09:59:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:06.131 09:59:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:06.131 09:59:09 -- common/autotest_common.sh@1541 -- # continue 00:07:06.131 09:59:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:06.131 09:59:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.131 09:59:09 -- common/autotest_common.sh@10 -- # set +x 00:07:06.131 09:59:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:06.131 09:59:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.131 09:59:09 -- common/autotest_common.sh@10 -- # set +x 00:07:06.131 09:59:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:10.340 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:10.340 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:10.600 09:59:13 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:07:10.600 09:59:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.600 09:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.600 09:59:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:10.600 09:59:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:10.600 09:59:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:10.600 09:59:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:10.600 09:59:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:10.600 09:59:14 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:10.600 09:59:14 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:10.600 09:59:14 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:10.600 09:59:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:10.600 09:59:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:10.600 09:59:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.600 09:59:14 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:10.600 09:59:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:10.600 09:59:14 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:10.600 09:59:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:07:10.600 09:59:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:10.600 09:59:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:07:10.600 09:59:14 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:07:10.600 09:59:14 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:07:10.600 09:59:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:07:10.861 09:59:14 -- common/autotest_common.sh@1570 -- # return 0 00:07:10.861 09:59:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:10.861 09:59:14 -- common/autotest_common.sh@1578 -- # return 0 00:07:10.861 09:59:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:10.861 09:59:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:10.861 09:59:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:10.861 09:59:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:10.861 09:59:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:10.861 09:59:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.861 09:59:14 -- common/autotest_common.sh@10 -- # set +x 00:07:10.861 09:59:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:10.861 09:59:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:10.861 09:59:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.861 09:59:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.861 09:59:14 -- common/autotest_common.sh@10 -- # set +x 00:07:10.861 ************************************ 00:07:10.861 START TEST env 00:07:10.861 ************************************ 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:10.862 * Looking for test storage... 
00:07:10.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.862 09:59:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.862 09:59:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.862 09:59:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.862 09:59:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.862 09:59:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.862 09:59:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.862 09:59:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.862 09:59:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.862 09:59:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.862 09:59:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.862 09:59:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.862 09:59:14 env -- scripts/common.sh@344 -- # case "$op" in 00:07:10.862 09:59:14 env -- scripts/common.sh@345 -- # : 1 00:07:10.862 09:59:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.862 09:59:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.862 09:59:14 env -- scripts/common.sh@365 -- # decimal 1 00:07:10.862 09:59:14 env -- scripts/common.sh@353 -- # local d=1 00:07:10.862 09:59:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.862 09:59:14 env -- scripts/common.sh@355 -- # echo 1 00:07:10.862 09:59:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.862 09:59:14 env -- scripts/common.sh@366 -- # decimal 2 00:07:10.862 09:59:14 env -- scripts/common.sh@353 -- # local d=2 00:07:10.862 09:59:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.862 09:59:14 env -- scripts/common.sh@355 -- # echo 2 00:07:10.862 09:59:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.862 09:59:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.862 09:59:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.862 09:59:14 env -- scripts/common.sh@368 -- # return 0 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.862 --rc genhtml_branch_coverage=1 00:07:10.862 --rc genhtml_function_coverage=1 00:07:10.862 --rc genhtml_legend=1 00:07:10.862 --rc geninfo_all_blocks=1 00:07:10.862 --rc geninfo_unexecuted_blocks=1 00:07:10.862 00:07:10.862 ' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.862 --rc genhtml_branch_coverage=1 00:07:10.862 --rc genhtml_function_coverage=1 00:07:10.862 --rc genhtml_legend=1 00:07:10.862 --rc geninfo_all_blocks=1 00:07:10.862 --rc geninfo_unexecuted_blocks=1 00:07:10.862 00:07:10.862 ' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.862 --rc genhtml_branch_coverage=1 00:07:10.862 --rc genhtml_function_coverage=1 
00:07:10.862 --rc genhtml_legend=1 00:07:10.862 --rc geninfo_all_blocks=1 00:07:10.862 --rc geninfo_unexecuted_blocks=1 00:07:10.862 00:07:10.862 ' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.862 --rc genhtml_branch_coverage=1 00:07:10.862 --rc genhtml_function_coverage=1 00:07:10.862 --rc genhtml_legend=1 00:07:10.862 --rc geninfo_all_blocks=1 00:07:10.862 --rc geninfo_unexecuted_blocks=1 00:07:10.862 00:07:10.862 ' 00:07:10.862 09:59:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.862 09:59:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.862 09:59:14 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.123 ************************************ 00:07:11.123 START TEST env_memory 00:07:11.123 ************************************ 00:07:11.123 09:59:14 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:11.123 00:07:11.123 00:07:11.123 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.123 http://cunit.sourceforge.net/ 00:07:11.123 00:07:11.123 00:07:11.123 Suite: memory 00:07:11.123 Test: alloc and free memory map ...[2024-11-06 09:59:14.435120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:11.123 passed 00:07:11.123 Test: mem map translation ...[2024-11-06 09:59:14.460488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:11.123 [2024-11-06 09:59:14.460509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:11.123 [2024-11-06 09:59:14.460555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:11.123 [2024-11-06 09:59:14.460563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:11.123 passed 00:07:11.123 Test: mem map registration ...[2024-11-06 09:59:14.515657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:11.123 [2024-11-06 09:59:14.515679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:11.123 passed 00:07:11.123 Test: mem map adjacent registrations ...passed 00:07:11.123 00:07:11.123 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.123 suites 1 1 n/a 0 0 00:07:11.123 tests 4 4 4 0 0 00:07:11.123 asserts 152 152 152 0 n/a 00:07:11.123 00:07:11.123 Elapsed time = 0.192 seconds 00:07:11.123 00:07:11.123 real 0m0.206s 00:07:11.123 user 0m0.197s 00:07:11.123 sys 0m0.009s 00:07:11.123 09:59:14 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.123 09:59:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:07:11.123 ************************************ 00:07:11.123 END TEST env_memory 00:07:11.123 ************************************ 00:07:11.385 09:59:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:11.385 09:59:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:11.385 09:59:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.385 09:59:14 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.385 ************************************ 00:07:11.385 START TEST env_vtophys 00:07:11.385 ************************************ 00:07:11.385 09:59:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:11.385 EAL: lib.eal log level changed from notice to debug 00:07:11.385 EAL: Detected lcore 0 as core 0 on socket 0 00:07:11.385 EAL: Detected lcore 1 as core 1 on socket 0 00:07:11.385 EAL: Detected lcore 2 as core 2 on socket 0 00:07:11.385 EAL: Detected lcore 3 as core 3 on socket 0 00:07:11.385 EAL: Detected lcore 4 as core 4 on socket 0 00:07:11.385 EAL: Detected lcore 5 as core 5 on socket 0 00:07:11.385 EAL: Detected lcore 6 as core 6 on socket 0 00:07:11.385 EAL: Detected lcore 7 as core 7 on socket 0 00:07:11.385 EAL: Detected lcore 8 as core 8 on socket 0 00:07:11.385 EAL: Detected lcore 9 as core 9 on socket 0 00:07:11.385 EAL: Detected lcore 10 as core 10 on socket 0 00:07:11.385 EAL: Detected lcore 11 as core 11 on socket 0 00:07:11.385 EAL: Detected lcore 12 as core 12 on socket 0 00:07:11.385 EAL: Detected lcore 13 as core 13 on socket 0 00:07:11.385 EAL: Detected lcore 14 as core 14 on socket 0 00:07:11.385 EAL: Detected lcore 15 as core 15 on socket 0 00:07:11.385 EAL: Detected lcore 16 as core 16 on socket 0 00:07:11.385 EAL: Detected lcore 17 as core 17 on socket 0 00:07:11.385 EAL: Detected lcore 18 as core 18 on socket 0 00:07:11.385 EAL: Detected lcore 19 as core 19 on socket 0 00:07:11.385 EAL: Detected lcore 20 as core 20 on socket 0 00:07:11.385 EAL: Detected lcore 21 as core 21 on socket 0 00:07:11.385 EAL: Detected lcore 22 as core 22 on socket 0 00:07:11.385 EAL: Detected lcore 23 as core 23 on socket 0 00:07:11.385 EAL: Detected lcore 24 as core 24 on socket 0 00:07:11.385 EAL: Detected lcore 25 as core 25 on socket 0 00:07:11.385 EAL: Detected lcore 26 as core 26 on socket 0 00:07:11.385 EAL: Detected lcore 27 as core 27 on socket 0 00:07:11.385 EAL: Detected lcore 28 as core 28 on socket 0 00:07:11.385 EAL: Detected lcore 29 as core 29 on socket 0 00:07:11.385 EAL: Detected lcore 30 as core 30 on socket 0 00:07:11.385 EAL: Detected lcore 31 as core 31 on socket 0 00:07:11.385 EAL: Detected lcore 32 as core 32 on socket 0 00:07:11.385 EAL: Detected lcore 33 as core 33 on socket 0 00:07:11.385 EAL: Detected lcore 34 as core 34 on socket 0 00:07:11.385 EAL: Detected lcore 35 as core 35 on socket 0 00:07:11.385 EAL: Detected lcore 36 as core 0 on socket 1 00:07:11.385 EAL: Detected lcore 37 as core 1 on socket 1 00:07:11.385 EAL: Detected lcore 38 as core 2 on socket 1 00:07:11.385 EAL: Detected lcore 39 as core 3 on socket 1 00:07:11.385 EAL: Detected lcore 40 as core 4 on socket 1 00:07:11.385 EAL: Detected lcore 41 as core 5 on socket 1 00:07:11.385 EAL: Detected lcore 42 as core 6 on socket 1 00:07:11.385 EAL: Detected lcore 43 as core 7 on socket 1 00:07:11.385 EAL: Detected lcore 44 as core 8 on socket 1 00:07:11.385 EAL: Detected lcore 45 as core 9 on socket 1 
00:07:11.385 EAL: Detected lcore 46 as core 10 on socket 1 00:07:11.385 EAL: Detected lcore 47 as core 11 on socket 1 00:07:11.385 EAL: Detected lcore 48 as core 12 on socket 1 00:07:11.385 EAL: Detected lcore 49 as core 13 on socket 1 00:07:11.385 EAL: Detected lcore 50 as core 14 on socket 1 00:07:11.385 EAL: Detected lcore 51 as core 15 on socket 1 00:07:11.385 EAL: Detected lcore 52 as core 16 on socket 1 00:07:11.385 EAL: Detected lcore 53 as core 17 on socket 1 00:07:11.385 EAL: Detected lcore 54 as core 18 on socket 1 00:07:11.385 EAL: Detected lcore 55 as core 19 on socket 1 00:07:11.385 EAL: Detected lcore 56 as core 20 on socket 1 00:07:11.385 EAL: Detected lcore 57 as core 21 on socket 1 00:07:11.385 EAL: Detected lcore 58 as core 22 on socket 1 00:07:11.385 EAL: Detected lcore 59 as core 23 on socket 1 00:07:11.385 EAL: Detected lcore 60 as core 24 on socket 1 00:07:11.385 EAL: Detected lcore 61 as core 25 on socket 1 00:07:11.385 EAL: Detected lcore 62 as core 26 on socket 1 00:07:11.385 EAL: Detected lcore 63 as core 27 on socket 1 00:07:11.385 EAL: Detected lcore 64 as core 28 on socket 1 00:07:11.385 EAL: Detected lcore 65 as core 29 on socket 1 00:07:11.385 EAL: Detected lcore 66 as core 30 on socket 1 00:07:11.385 EAL: Detected lcore 67 as core 31 on socket 1 00:07:11.385 EAL: Detected lcore 68 as core 32 on socket 1 00:07:11.385 EAL: Detected lcore 69 as core 33 on socket 1 00:07:11.385 EAL: Detected lcore 70 as core 34 on socket 1 00:07:11.385 EAL: Detected lcore 71 as core 35 on socket 1 00:07:11.385 EAL: Detected lcore 72 as core 0 on socket 0 00:07:11.385 EAL: Detected lcore 73 as core 1 on socket 0 00:07:11.385 EAL: Detected lcore 74 as core 2 on socket 0 00:07:11.385 EAL: Detected lcore 75 as core 3 on socket 0 00:07:11.385 EAL: Detected lcore 76 as core 4 on socket 0 00:07:11.385 EAL: Detected lcore 77 as core 5 on socket 0 00:07:11.385 EAL: Detected lcore 78 as core 6 on socket 0 00:07:11.385 EAL: Detected lcore 79 as core 7 on socket 0 00:07:11.385 EAL: Detected lcore 80 as core 8 on socket 0 00:07:11.385 EAL: Detected lcore 81 as core 9 on socket 0 00:07:11.385 EAL: Detected lcore 82 as core 10 on socket 0 00:07:11.385 EAL: Detected lcore 83 as core 11 on socket 0 00:07:11.385 EAL: Detected lcore 84 as core 12 on socket 0 00:07:11.385 EAL: Detected lcore 85 as core 13 on socket 0 00:07:11.385 EAL: Detected lcore 86 as core 14 on socket 0 00:07:11.385 EAL: Detected lcore 87 as core 15 on socket 0 00:07:11.385 EAL: Detected lcore 88 as core 16 on socket 0 00:07:11.385 EAL: Detected lcore 89 as core 17 on socket 0 00:07:11.385 EAL: Detected lcore 90 as core 18 on socket 0 00:07:11.385 EAL: Detected lcore 91 as core 19 on socket 0 00:07:11.385 EAL: Detected lcore 92 as core 20 on socket 0 00:07:11.385 EAL: Detected lcore 93 as core 21 on socket 0 00:07:11.385 EAL: Detected lcore 94 as core 22 on socket 0 00:07:11.385 EAL: Detected lcore 95 as core 23 on socket 0 00:07:11.385 EAL: Detected lcore 96 as core 24 on socket 0 00:07:11.385 EAL: Detected lcore 97 as core 25 on socket 0 00:07:11.385 EAL: Detected lcore 98 as core 26 on socket 0 00:07:11.385 EAL: Detected lcore 99 as core 27 on socket 0 00:07:11.385 EAL: Detected lcore 100 as core 28 on socket 0 00:07:11.385 EAL: Detected lcore 101 as core 29 on socket 0 00:07:11.385 EAL: Detected lcore 102 as core 30 on socket 0 00:07:11.385 EAL: Detected lcore 103 as core 31 on socket 0 00:07:11.385 EAL: Detected lcore 104 as core 32 on socket 0 00:07:11.385 EAL: Detected lcore 105 as core 33 on socket 0 00:07:11.385 EAL: 
Detected lcore 106 as core 34 on socket 0 00:07:11.385 EAL: Detected lcore 107 as core 35 on socket 0 00:07:11.385 EAL: Detected lcore 108 as core 0 on socket 1 00:07:11.385 EAL: Detected lcore 109 as core 1 on socket 1 00:07:11.385 EAL: Detected lcore 110 as core 2 on socket 1 00:07:11.385 EAL: Detected lcore 111 as core 3 on socket 1 00:07:11.385 EAL: Detected lcore 112 as core 4 on socket 1 00:07:11.385 EAL: Detected lcore 113 as core 5 on socket 1 00:07:11.385 EAL: Detected lcore 114 as core 6 on socket 1 00:07:11.385 EAL: Detected lcore 115 as core 7 on socket 1 00:07:11.385 EAL: Detected lcore 116 as core 8 on socket 1 00:07:11.385 EAL: Detected lcore 117 as core 9 on socket 1 00:07:11.385 EAL: Detected lcore 118 as core 10 on socket 1 00:07:11.385 EAL: Detected lcore 119 as core 11 on socket 1 00:07:11.385 EAL: Detected lcore 120 as core 12 on socket 1 00:07:11.385 EAL: Detected lcore 121 as core 13 on socket 1 00:07:11.385 EAL: Detected lcore 122 as core 14 on socket 1 00:07:11.385 EAL: Detected lcore 123 as core 15 on socket 1 00:07:11.386 EAL: Detected lcore 124 as core 16 on socket 1 00:07:11.386 EAL: Detected lcore 125 as core 17 on socket 1 00:07:11.386 EAL: Detected lcore 126 as core 18 on socket 1 00:07:11.386 EAL: Detected lcore 127 as core 19 on socket 1 00:07:11.386 EAL: Skipped lcore 128 as core 20 on socket 1 00:07:11.386 EAL: Skipped lcore 129 as core 21 on socket 1 00:07:11.386 EAL: Skipped lcore 130 as core 22 on socket 1 00:07:11.386 EAL: Skipped lcore 131 as core 23 on socket 1 00:07:11.386 EAL: Skipped lcore 132 as core 24 on socket 1 00:07:11.386 EAL: Skipped lcore 133 as core 25 on socket 1 00:07:11.386 EAL: Skipped lcore 134 as core 26 on socket 1 00:07:11.386 EAL: Skipped lcore 135 as core 27 on socket 1 00:07:11.386 EAL: Skipped lcore 136 as core 28 on socket 1 00:07:11.386 EAL: Skipped lcore 137 as core 29 on socket 1 00:07:11.386 EAL: Skipped lcore 138 as core 30 on socket 1 00:07:11.386 EAL: Skipped lcore 139 as core 31 on socket 1 00:07:11.386 EAL: Skipped lcore 140 as core 32 on socket 1 00:07:11.386 EAL: Skipped lcore 141 as core 33 on socket 1 00:07:11.386 EAL: Skipped lcore 142 as core 34 on socket 1 00:07:11.386 EAL: Skipped lcore 143 as core 35 on socket 1 00:07:11.386 EAL: Maximum logical cores by configuration: 128 00:07:11.386 EAL: Detected CPU lcores: 128 00:07:11.386 EAL: Detected NUMA nodes: 2 00:07:11.386 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:11.386 EAL: Detected shared linkage of DPDK 00:07:11.386 EAL: No shared files mode enabled, IPC will be disabled 00:07:11.386 EAL: Bus pci wants IOVA as 'DC' 00:07:11.386 EAL: Buses did not request a specific IOVA mode. 00:07:11.386 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:11.386 EAL: Selected IOVA mode 'VA' 00:07:11.386 EAL: Probing VFIO support... 00:07:11.386 EAL: IOMMU type 1 (Type 1) is supported 00:07:11.386 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:11.386 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:11.386 EAL: VFIO support initialized 00:07:11.386 EAL: Ask a virtual area of 0x2e000 bytes 00:07:11.386 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:11.386 EAL: Setting up physically contiguous memory... 
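[Editor's sketch, not part of the captured log] The EAL probe above reports "IOMMU type 1 (Type 1) is supported" and "VFIO support initialized", and the earlier setup.sh status output lists per-NUMA-node 2 MB hugepages. A small hedged bash check of those same host prerequisites is sketched below; the sysfs paths are standard Linux locations, and nothing here is taken from the SPDK scripts themselves.

#!/usr/bin/env bash
# Sketch: verify VFIO, IOMMU groups, and 2 MB hugepages before a DPDK/SPDK run.

if [[ -c /dev/vfio/vfio ]]; then
    echo "VFIO container device present"
else
    echo "VFIO container device missing (load vfio-pci / enable the IOMMU)"
fi

echo "IOMMU groups: $(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)"

# Per-node free/total 2 MB hugepages, in the same "node0 2048kB free / total" style
# as the setup.sh status summary earlier in this log.
for node in /sys/devices/system/node/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    [[ -d $hp ]] || continue
    echo "$(basename "$node") 2048kB $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
done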
00:07:11.386 EAL: Setting maximum number of open files to 524288 00:07:11.386 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:11.386 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:11.386 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:11.386 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:11.386 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.386 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:11.386 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.386 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.386 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:07:11.386 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:11.386 EAL: Hugepages will be freed exactly as allocated. 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: TSC frequency is ~2400000 KHz 00:07:11.386 EAL: Main lcore 0 is ready (tid=7f5fcceeda00;cpuset=[0]) 00:07:11.386 EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 0 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 2MB 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:11.386 EAL: Mem event callback 'spdk:(nil)' registered 00:07:11.386 00:07:11.386 00:07:11.386 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.386 http://cunit.sourceforge.net/ 00:07:11.386 00:07:11.386 00:07:11.386 Suite: components_suite 00:07:11.386 Test: vtophys_malloc_test ...passed 00:07:11.386 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 4MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was shrunk by 4MB 00:07:11.386 EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 6MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was shrunk by 6MB 00:07:11.386 EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 10MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was shrunk by 10MB 00:07:11.386 EAL: Trying to obtain current memory policy. 
00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 18MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was shrunk by 18MB 00:07:11.386 EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 34MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was shrunk by 34MB 00:07:11.386 EAL: Trying to obtain current memory policy. 00:07:11.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.386 EAL: Restoring previous memory policy: 4 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.386 EAL: No shared files mode enabled, IPC is disabled 00:07:11.386 EAL: Heap on socket 0 was expanded by 66MB 00:07:11.386 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.386 EAL: request: mp_malloc_sync 00:07:11.387 EAL: No shared files mode enabled, IPC is disabled 00:07:11.387 EAL: Heap on socket 0 was shrunk by 66MB 00:07:11.387 EAL: Trying to obtain current memory policy. 00:07:11.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.387 EAL: Restoring previous memory policy: 4 00:07:11.387 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.387 EAL: request: mp_malloc_sync 00:07:11.387 EAL: No shared files mode enabled, IPC is disabled 00:07:11.387 EAL: Heap on socket 0 was expanded by 130MB 00:07:11.387 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.387 EAL: request: mp_malloc_sync 00:07:11.387 EAL: No shared files mode enabled, IPC is disabled 00:07:11.387 EAL: Heap on socket 0 was shrunk by 130MB 00:07:11.387 EAL: Trying to obtain current memory policy. 00:07:11.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.387 EAL: Restoring previous memory policy: 4 00:07:11.387 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.387 EAL: request: mp_malloc_sync 00:07:11.387 EAL: No shared files mode enabled, IPC is disabled 00:07:11.387 EAL: Heap on socket 0 was expanded by 258MB 00:07:11.647 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.647 EAL: request: mp_malloc_sync 00:07:11.647 EAL: No shared files mode enabled, IPC is disabled 00:07:11.647 EAL: Heap on socket 0 was shrunk by 258MB 00:07:11.647 EAL: Trying to obtain current memory policy. 
00:07:11.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.648 EAL: Restoring previous memory policy: 4 00:07:11.648 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.648 EAL: request: mp_malloc_sync 00:07:11.648 EAL: No shared files mode enabled, IPC is disabled 00:07:11.648 EAL: Heap on socket 0 was expanded by 514MB 00:07:11.648 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.648 EAL: request: mp_malloc_sync 00:07:11.648 EAL: No shared files mode enabled, IPC is disabled 00:07:11.648 EAL: Heap on socket 0 was shrunk by 514MB 00:07:11.648 EAL: Trying to obtain current memory policy. 00:07:11.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.908 EAL: Restoring previous memory policy: 4 00:07:11.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.908 EAL: request: mp_malloc_sync 00:07:11.908 EAL: No shared files mode enabled, IPC is disabled 00:07:11.908 EAL: Heap on socket 0 was expanded by 1026MB 00:07:11.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.170 EAL: request: mp_malloc_sync 00:07:12.170 EAL: No shared files mode enabled, IPC is disabled 00:07:12.170 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:12.170 passed 00:07:12.170 00:07:12.170 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.170 suites 1 1 n/a 0 0 00:07:12.170 tests 2 2 2 0 0 00:07:12.170 asserts 497 497 497 0 n/a 00:07:12.170 00:07:12.170 Elapsed time = 0.647 seconds 00:07:12.170 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.170 EAL: request: mp_malloc_sync 00:07:12.170 EAL: No shared files mode enabled, IPC is disabled 00:07:12.170 EAL: Heap on socket 0 was shrunk by 2MB 00:07:12.170 EAL: No shared files mode enabled, IPC is disabled 00:07:12.170 EAL: No shared files mode enabled, IPC is disabled 00:07:12.170 EAL: No shared files mode enabled, IPC is disabled 00:07:12.170 00:07:12.170 real 0m0.795s 00:07:12.170 user 0m0.421s 00:07:12.170 sys 0m0.336s 00:07:12.170 09:59:15 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.170 09:59:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 ************************************ 00:07:12.170 END TEST env_vtophys 00:07:12.170 ************************************ 00:07:12.170 09:59:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:12.170 09:59:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.170 09:59:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.170 09:59:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 ************************************ 00:07:12.170 START TEST env_pci 00:07:12.170 ************************************ 00:07:12.170 09:59:15 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:12.170 00:07:12.170 00:07:12.170 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.170 http://cunit.sourceforge.net/ 00:07:12.170 00:07:12.170 00:07:12.170 Suite: pci 00:07:12.170 Test: pci_hook ...[2024-11-06 09:59:15.553064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3635424 has claimed it 00:07:12.170 EAL: Cannot find device (10000:00:01.0) 00:07:12.170 EAL: Failed to attach device on primary process 00:07:12.170 passed 00:07:12.170 00:07:12.170 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:12.170 suites 1 1 n/a 0 0 00:07:12.170 tests 1 1 1 0 0 00:07:12.170 asserts 25 25 25 0 n/a 00:07:12.170 00:07:12.170 Elapsed time = 0.035 seconds 00:07:12.170 00:07:12.170 real 0m0.055s 00:07:12.170 user 0m0.019s 00:07:12.170 sys 0m0.036s 00:07:12.170 09:59:15 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.170 09:59:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 ************************************ 00:07:12.170 END TEST env_pci 00:07:12.170 ************************************ 00:07:12.170 09:59:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:12.170 09:59:15 env -- env/env.sh@15 -- # uname 00:07:12.170 09:59:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:12.170 09:59:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:12.170 09:59:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:12.170 09:59:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:12.170 09:59:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.170 09:59:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:12.431 ************************************ 00:07:12.431 START TEST env_dpdk_post_init 00:07:12.431 ************************************ 00:07:12.431 09:59:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:12.431 EAL: Detected CPU lcores: 128 00:07:12.431 EAL: Detected NUMA nodes: 2 00:07:12.431 EAL: Detected shared linkage of DPDK 00:07:12.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:12.431 EAL: Selected IOVA mode 'VA' 00:07:12.431 EAL: VFIO support initialized 00:07:12.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:12.431 EAL: Using IOMMU type 1 (Type 1) 00:07:12.431 EAL: Ignore mapping IO port bar(1) 00:07:12.692 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:07:12.692 EAL: Ignore mapping IO port bar(1) 00:07:12.952 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:07:12.952 EAL: Ignore mapping IO port bar(1) 00:07:13.213 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:07:13.213 EAL: Ignore mapping IO port bar(1) 00:07:13.213 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:07:13.475 EAL: Ignore mapping IO port bar(1) 00:07:13.475 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:07:13.736 EAL: Ignore mapping IO port bar(1) 00:07:13.736 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:07:13.997 EAL: Ignore mapping IO port bar(1) 00:07:13.997 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:07:14.257 EAL: Ignore mapping IO port bar(1) 00:07:14.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:07:14.531 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:07:14.531 EAL: Ignore mapping IO port bar(1) 00:07:14.791 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:07:14.791 EAL: Ignore mapping IO port bar(1) 00:07:14.791 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:07:15.051 EAL: Ignore mapping IO port bar(1) 00:07:15.051 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:07:15.312 EAL: Ignore mapping IO port bar(1) 00:07:15.312 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:07:15.573 EAL: Ignore mapping IO port bar(1) 00:07:15.573 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:07:15.573 EAL: Ignore mapping IO port bar(1) 00:07:15.833 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:07:15.833 EAL: Ignore mapping IO port bar(1) 00:07:16.093 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:07:16.093 EAL: Ignore mapping IO port bar(1) 00:07:16.354 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:07:16.354 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:07:16.354 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:07:16.354 Starting DPDK initialization... 00:07:16.354 Starting SPDK post initialization... 00:07:16.354 SPDK NVMe probe 00:07:16.354 Attaching to 0000:65:00.0 00:07:16.354 Attached to 0000:65:00.0 00:07:16.354 Cleaning up... 00:07:18.268 00:07:18.268 real 0m5.738s 00:07:18.268 user 0m0.110s 00:07:18.268 sys 0m0.179s 00:07:18.268 09:59:21 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.268 09:59:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:18.268 ************************************ 00:07:18.268 END TEST env_dpdk_post_init 00:07:18.268 ************************************ 00:07:18.268 09:59:21 env -- env/env.sh@26 -- # uname 00:07:18.268 09:59:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:18.268 09:59:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:18.268 09:59:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.268 09:59:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.268 09:59:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:18.268 ************************************ 00:07:18.268 START TEST env_mem_callbacks 00:07:18.268 ************************************ 00:07:18.268 09:59:21 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:18.268 EAL: Detected CPU lcores: 128 00:07:18.268 EAL: Detected NUMA nodes: 2 00:07:18.268 EAL: Detected shared linkage of DPDK 00:07:18.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:18.268 EAL: Selected IOVA mode 'VA' 00:07:18.268 EAL: VFIO support initialized 00:07:18.268 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:18.268 00:07:18.268 00:07:18.268 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.268 http://cunit.sourceforge.net/ 00:07:18.268 00:07:18.268 00:07:18.268 Suite: memory 00:07:18.268 Test: test ... 
00:07:18.268 register 0x200000200000 2097152 00:07:18.268 malloc 3145728 00:07:18.268 register 0x200000400000 4194304 00:07:18.268 buf 0x200000500000 len 3145728 PASSED 00:07:18.268 malloc 64 00:07:18.268 buf 0x2000004fff40 len 64 PASSED 00:07:18.268 malloc 4194304 00:07:18.268 register 0x200000800000 6291456 00:07:18.268 buf 0x200000a00000 len 4194304 PASSED 00:07:18.268 free 0x200000500000 3145728 00:07:18.268 free 0x2000004fff40 64 00:07:18.268 unregister 0x200000400000 4194304 PASSED 00:07:18.268 free 0x200000a00000 4194304 00:07:18.268 unregister 0x200000800000 6291456 PASSED 00:07:18.268 malloc 8388608 00:07:18.268 register 0x200000400000 10485760 00:07:18.268 buf 0x200000600000 len 8388608 PASSED 00:07:18.268 free 0x200000600000 8388608 00:07:18.268 unregister 0x200000400000 10485760 PASSED 00:07:18.268 passed 00:07:18.268 00:07:18.268 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.268 suites 1 1 n/a 0 0 00:07:18.268 tests 1 1 1 0 0 00:07:18.268 asserts 15 15 15 0 n/a 00:07:18.268 00:07:18.268 Elapsed time = 0.005 seconds 00:07:18.268 00:07:18.268 real 0m0.064s 00:07:18.268 user 0m0.014s 00:07:18.268 sys 0m0.051s 00:07:18.268 09:59:21 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.268 09:59:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:18.268 ************************************ 00:07:18.268 END TEST env_mem_callbacks 00:07:18.268 ************************************ 00:07:18.268 00:07:18.268 real 0m7.451s 00:07:18.269 user 0m1.035s 00:07:18.269 sys 0m0.958s 00:07:18.269 09:59:21 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.269 09:59:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:18.269 ************************************ 00:07:18.269 END TEST env 00:07:18.269 ************************************ 00:07:18.269 09:59:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:18.269 09:59:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.269 09:59:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.269 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:18.269 ************************************ 00:07:18.269 START TEST rpc 00:07:18.269 ************************************ 00:07:18.269 09:59:21 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:18.530 * Looking for test storage... 
00:07:18.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:18.530 09:59:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.530 09:59:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.530 09:59:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.530 09:59:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.530 09:59:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.530 09:59:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.530 09:59:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.530 09:59:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.530 09:59:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.530 09:59:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.530 09:59:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.531 09:59:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.531 09:59:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:18.531 09:59:21 rpc -- scripts/common.sh@345 -- # : 1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.531 09:59:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.531 09:59:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@353 -- # local d=1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.531 09:59:21 rpc -- scripts/common.sh@355 -- # echo 1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.531 09:59:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@353 -- # local d=2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.531 09:59:21 rpc -- scripts/common.sh@355 -- # echo 2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.531 09:59:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.531 09:59:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.531 09:59:21 rpc -- scripts/common.sh@368 -- # return 0 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.531 --rc genhtml_branch_coverage=1 00:07:18.531 --rc genhtml_function_coverage=1 00:07:18.531 --rc genhtml_legend=1 00:07:18.531 --rc geninfo_all_blocks=1 00:07:18.531 --rc geninfo_unexecuted_blocks=1 00:07:18.531 00:07:18.531 ' 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.531 --rc genhtml_branch_coverage=1 00:07:18.531 --rc genhtml_function_coverage=1 00:07:18.531 --rc genhtml_legend=1 00:07:18.531 --rc geninfo_all_blocks=1 00:07:18.531 --rc geninfo_unexecuted_blocks=1 00:07:18.531 00:07:18.531 ' 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.531 --rc genhtml_branch_coverage=1 00:07:18.531 --rc genhtml_function_coverage=1 
00:07:18.531 --rc genhtml_legend=1 00:07:18.531 --rc geninfo_all_blocks=1 00:07:18.531 --rc geninfo_unexecuted_blocks=1 00:07:18.531 00:07:18.531 ' 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.531 --rc genhtml_branch_coverage=1 00:07:18.531 --rc genhtml_function_coverage=1 00:07:18.531 --rc genhtml_legend=1 00:07:18.531 --rc geninfo_all_blocks=1 00:07:18.531 --rc geninfo_unexecuted_blocks=1 00:07:18.531 00:07:18.531 ' 00:07:18.531 09:59:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3636794 00:07:18.531 09:59:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.531 09:59:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3636794 00:07:18.531 09:59:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@833 -- # '[' -z 3636794 ']' 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.531 09:59:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.531 [2024-11-06 09:59:21.949447] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:18.531 [2024-11-06 09:59:21.949522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636794 ] 00:07:18.792 [2024-11-06 09:59:22.032219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.792 [2024-11-06 09:59:22.074120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:18.792 [2024-11-06 09:59:22.074155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3636794' to capture a snapshot of events at runtime. 00:07:18.792 [2024-11-06 09:59:22.074163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.792 [2024-11-06 09:59:22.074171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.792 [2024-11-06 09:59:22.074177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3636794 for offline analysis/debug. 
00:07:18.792 [2024-11-06 09:59:22.074729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.364 09:59:22 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.364 09:59:22 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:19.364 09:59:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:19.364 09:59:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:19.364 09:59:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:19.364 09:59:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:19.364 09:59:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.364 09:59:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.364 09:59:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.364 ************************************ 00:07:19.364 START TEST rpc_integrity 00:07:19.364 ************************************ 00:07:19.364 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:19.364 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:19.364 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.364 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.364 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:19.365 { 00:07:19.365 "name": "Malloc0", 00:07:19.365 "aliases": [ 00:07:19.365 "a7effcaa-7063-4dfd-a32f-ebb4ebedc570" 00:07:19.365 ], 00:07:19.365 "product_name": "Malloc disk", 00:07:19.365 "block_size": 512, 00:07:19.365 "num_blocks": 16384, 00:07:19.365 "uuid": "a7effcaa-7063-4dfd-a32f-ebb4ebedc570", 00:07:19.365 "assigned_rate_limits": { 00:07:19.365 "rw_ios_per_sec": 0, 00:07:19.365 "rw_mbytes_per_sec": 0, 00:07:19.365 "r_mbytes_per_sec": 0, 00:07:19.365 "w_mbytes_per_sec": 0 00:07:19.365 }, 
00:07:19.365 "claimed": false, 00:07:19.365 "zoned": false, 00:07:19.365 "supported_io_types": { 00:07:19.365 "read": true, 00:07:19.365 "write": true, 00:07:19.365 "unmap": true, 00:07:19.365 "flush": true, 00:07:19.365 "reset": true, 00:07:19.365 "nvme_admin": false, 00:07:19.365 "nvme_io": false, 00:07:19.365 "nvme_io_md": false, 00:07:19.365 "write_zeroes": true, 00:07:19.365 "zcopy": true, 00:07:19.365 "get_zone_info": false, 00:07:19.365 "zone_management": false, 00:07:19.365 "zone_append": false, 00:07:19.365 "compare": false, 00:07:19.365 "compare_and_write": false, 00:07:19.365 "abort": true, 00:07:19.365 "seek_hole": false, 00:07:19.365 "seek_data": false, 00:07:19.365 "copy": true, 00:07:19.365 "nvme_iov_md": false 00:07:19.365 }, 00:07:19.365 "memory_domains": [ 00:07:19.365 { 00:07:19.365 "dma_device_id": "system", 00:07:19.365 "dma_device_type": 1 00:07:19.365 }, 00:07:19.365 { 00:07:19.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.365 "dma_device_type": 2 00:07:19.365 } 00:07:19.365 ], 00:07:19.365 "driver_specific": {} 00:07:19.365 } 00:07:19.365 ]' 00:07:19.365 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.626 [2024-11-06 09:59:22.894876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:19.626 [2024-11-06 09:59:22.894908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.626 [2024-11-06 09:59:22.894921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ab4b00 00:07:19.626 [2024-11-06 09:59:22.894929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.626 [2024-11-06 09:59:22.896291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.626 [2024-11-06 09:59:22.896312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:19.626 Passthru0 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:19.626 { 00:07:19.626 "name": "Malloc0", 00:07:19.626 "aliases": [ 00:07:19.626 "a7effcaa-7063-4dfd-a32f-ebb4ebedc570" 00:07:19.626 ], 00:07:19.626 "product_name": "Malloc disk", 00:07:19.626 "block_size": 512, 00:07:19.626 "num_blocks": 16384, 00:07:19.626 "uuid": "a7effcaa-7063-4dfd-a32f-ebb4ebedc570", 00:07:19.626 "assigned_rate_limits": { 00:07:19.626 "rw_ios_per_sec": 0, 00:07:19.626 "rw_mbytes_per_sec": 0, 00:07:19.626 "r_mbytes_per_sec": 0, 00:07:19.626 "w_mbytes_per_sec": 0 00:07:19.626 }, 00:07:19.626 "claimed": true, 00:07:19.626 "claim_type": "exclusive_write", 00:07:19.626 "zoned": false, 00:07:19.626 "supported_io_types": { 00:07:19.626 "read": true, 00:07:19.626 "write": true, 00:07:19.626 "unmap": true, 00:07:19.626 "flush": 
true, 00:07:19.626 "reset": true, 00:07:19.626 "nvme_admin": false, 00:07:19.626 "nvme_io": false, 00:07:19.626 "nvme_io_md": false, 00:07:19.626 "write_zeroes": true, 00:07:19.626 "zcopy": true, 00:07:19.626 "get_zone_info": false, 00:07:19.626 "zone_management": false, 00:07:19.626 "zone_append": false, 00:07:19.626 "compare": false, 00:07:19.626 "compare_and_write": false, 00:07:19.626 "abort": true, 00:07:19.626 "seek_hole": false, 00:07:19.626 "seek_data": false, 00:07:19.626 "copy": true, 00:07:19.626 "nvme_iov_md": false 00:07:19.626 }, 00:07:19.626 "memory_domains": [ 00:07:19.626 { 00:07:19.626 "dma_device_id": "system", 00:07:19.626 "dma_device_type": 1 00:07:19.626 }, 00:07:19.626 { 00:07:19.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.626 "dma_device_type": 2 00:07:19.626 } 00:07:19.626 ], 00:07:19.626 "driver_specific": {} 00:07:19.626 }, 00:07:19.626 { 00:07:19.626 "name": "Passthru0", 00:07:19.626 "aliases": [ 00:07:19.626 "8444d5ae-4692-5820-b191-43cd20c9c80d" 00:07:19.626 ], 00:07:19.626 "product_name": "passthru", 00:07:19.626 "block_size": 512, 00:07:19.626 "num_blocks": 16384, 00:07:19.626 "uuid": "8444d5ae-4692-5820-b191-43cd20c9c80d", 00:07:19.626 "assigned_rate_limits": { 00:07:19.626 "rw_ios_per_sec": 0, 00:07:19.626 "rw_mbytes_per_sec": 0, 00:07:19.626 "r_mbytes_per_sec": 0, 00:07:19.626 "w_mbytes_per_sec": 0 00:07:19.626 }, 00:07:19.626 "claimed": false, 00:07:19.626 "zoned": false, 00:07:19.626 "supported_io_types": { 00:07:19.626 "read": true, 00:07:19.626 "write": true, 00:07:19.626 "unmap": true, 00:07:19.626 "flush": true, 00:07:19.626 "reset": true, 00:07:19.626 "nvme_admin": false, 00:07:19.626 "nvme_io": false, 00:07:19.626 "nvme_io_md": false, 00:07:19.626 "write_zeroes": true, 00:07:19.626 "zcopy": true, 00:07:19.626 "get_zone_info": false, 00:07:19.626 "zone_management": false, 00:07:19.626 "zone_append": false, 00:07:19.626 "compare": false, 00:07:19.626 "compare_and_write": false, 00:07:19.626 "abort": true, 00:07:19.626 "seek_hole": false, 00:07:19.626 "seek_data": false, 00:07:19.626 "copy": true, 00:07:19.626 "nvme_iov_md": false 00:07:19.626 }, 00:07:19.626 "memory_domains": [ 00:07:19.626 { 00:07:19.626 "dma_device_id": "system", 00:07:19.626 "dma_device_type": 1 00:07:19.626 }, 00:07:19.626 { 00:07:19.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.626 "dma_device_type": 2 00:07:19.626 } 00:07:19.626 ], 00:07:19.626 "driver_specific": { 00:07:19.626 "passthru": { 00:07:19.626 "name": "Passthru0", 00:07:19.626 "base_bdev_name": "Malloc0" 00:07:19.626 } 00:07:19.626 } 00:07:19.626 } 00:07:19.626 ]' 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.626 09:59:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:19.626 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.627 09:59:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.627 09:59:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.627 09:59:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:19.627 09:59:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:19.627 09:59:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:19.627 00:07:19.627 real 0m0.289s 00:07:19.627 user 0m0.188s 00:07:19.627 sys 0m0.039s 00:07:19.627 09:59:23 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.627 09:59:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.627 ************************************ 00:07:19.627 END TEST rpc_integrity 00:07:19.627 ************************************ 00:07:19.627 09:59:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:19.627 09:59:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.627 09:59:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.627 09:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 ************************************ 00:07:19.888 START TEST rpc_plugins 00:07:19.888 ************************************ 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:19.888 { 00:07:19.888 "name": "Malloc1", 00:07:19.888 "aliases": [ 00:07:19.888 "5e1d5f16-bf7c-4eb9-b055-b623481fbc81" 00:07:19.888 ], 00:07:19.888 "product_name": "Malloc disk", 00:07:19.888 "block_size": 4096, 00:07:19.888 "num_blocks": 256, 00:07:19.888 "uuid": "5e1d5f16-bf7c-4eb9-b055-b623481fbc81", 00:07:19.888 "assigned_rate_limits": { 00:07:19.888 "rw_ios_per_sec": 0, 00:07:19.888 "rw_mbytes_per_sec": 0, 00:07:19.888 "r_mbytes_per_sec": 0, 00:07:19.888 "w_mbytes_per_sec": 0 00:07:19.888 }, 00:07:19.888 "claimed": false, 00:07:19.888 "zoned": false, 00:07:19.888 "supported_io_types": { 00:07:19.888 "read": true, 00:07:19.888 "write": true, 00:07:19.888 "unmap": true, 00:07:19.888 "flush": true, 00:07:19.888 "reset": true, 00:07:19.888 "nvme_admin": false, 00:07:19.888 "nvme_io": false, 00:07:19.888 "nvme_io_md": false, 00:07:19.888 "write_zeroes": true, 00:07:19.888 "zcopy": true, 00:07:19.888 "get_zone_info": false, 00:07:19.888 "zone_management": false, 00:07:19.888 "zone_append": false, 00:07:19.888 "compare": false, 00:07:19.888 "compare_and_write": false, 00:07:19.888 "abort": true, 00:07:19.888 "seek_hole": false, 00:07:19.888 "seek_data": false, 00:07:19.888 "copy": true, 00:07:19.888 "nvme_iov_md": false 
00:07:19.888 }, 00:07:19.888 "memory_domains": [ 00:07:19.888 { 00:07:19.888 "dma_device_id": "system", 00:07:19.888 "dma_device_type": 1 00:07:19.888 }, 00:07:19.888 { 00:07:19.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.888 "dma_device_type": 2 00:07:19.888 } 00:07:19.888 ], 00:07:19.888 "driver_specific": {} 00:07:19.888 } 00:07:19.888 ]' 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:19.888 09:59:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:19.888 00:07:19.888 real 0m0.151s 00:07:19.888 user 0m0.092s 00:07:19.888 sys 0m0.024s 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 ************************************ 00:07:19.888 END TEST rpc_plugins 00:07:19.888 ************************************ 00:07:19.888 09:59:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:19.888 09:59:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.888 09:59:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.888 09:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 ************************************ 00:07:19.888 START TEST rpc_trace_cmd_test 00:07:19.888 ************************************ 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.888 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:19.888 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3636794", 00:07:19.888 "tpoint_group_mask": "0x8", 00:07:19.888 "iscsi_conn": { 00:07:19.888 "mask": "0x2", 00:07:19.888 "tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "scsi": { 00:07:19.888 "mask": "0x4", 00:07:19.888 "tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "bdev": { 00:07:19.888 "mask": "0x8", 00:07:19.888 "tpoint_mask": "0xffffffffffffffff" 00:07:19.888 }, 00:07:19.888 "nvmf_rdma": { 00:07:19.888 "mask": "0x10", 00:07:19.888 "tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "nvmf_tcp": { 00:07:19.888 "mask": "0x20", 00:07:19.888 
"tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "ftl": { 00:07:19.888 "mask": "0x40", 00:07:19.888 "tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "blobfs": { 00:07:19.888 "mask": "0x80", 00:07:19.888 "tpoint_mask": "0x0" 00:07:19.888 }, 00:07:19.888 "dsa": { 00:07:19.889 "mask": "0x200", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "thread": { 00:07:19.889 "mask": "0x400", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "nvme_pcie": { 00:07:19.889 "mask": "0x800", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "iaa": { 00:07:19.889 "mask": "0x1000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "nvme_tcp": { 00:07:19.889 "mask": "0x2000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "bdev_nvme": { 00:07:19.889 "mask": "0x4000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "sock": { 00:07:19.889 "mask": "0x8000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "blob": { 00:07:19.889 "mask": "0x10000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "bdev_raid": { 00:07:19.889 "mask": "0x20000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 }, 00:07:19.889 "scheduler": { 00:07:19.889 "mask": "0x40000", 00:07:19.889 "tpoint_mask": "0x0" 00:07:19.889 } 00:07:19.889 }' 00:07:19.889 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:20.150 00:07:20.150 real 0m0.251s 00:07:20.150 user 0m0.212s 00:07:20.150 sys 0m0.031s 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.150 09:59:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.150 ************************************ 00:07:20.150 END TEST rpc_trace_cmd_test 00:07:20.150 ************************************ 00:07:20.150 09:59:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:20.150 09:59:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:20.150 09:59:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:20.150 09:59:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:20.150 09:59:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.150 09:59:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 ************************************ 00:07:20.410 START TEST rpc_daemon_integrity 00:07:20.410 ************************************ 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.410 09:59:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:20.410 { 00:07:20.410 "name": "Malloc2", 00:07:20.410 "aliases": [ 00:07:20.410 "e1b1cb46-3c95-4e94-9808-bfc04186c12c" 00:07:20.410 ], 00:07:20.410 "product_name": "Malloc disk", 00:07:20.410 "block_size": 512, 00:07:20.410 "num_blocks": 16384, 00:07:20.410 "uuid": "e1b1cb46-3c95-4e94-9808-bfc04186c12c", 00:07:20.410 "assigned_rate_limits": { 00:07:20.410 "rw_ios_per_sec": 0, 00:07:20.410 "rw_mbytes_per_sec": 0, 00:07:20.410 "r_mbytes_per_sec": 0, 00:07:20.410 "w_mbytes_per_sec": 0 00:07:20.410 }, 00:07:20.410 "claimed": false, 00:07:20.410 "zoned": false, 00:07:20.410 "supported_io_types": { 00:07:20.410 "read": true, 00:07:20.410 "write": true, 00:07:20.410 "unmap": true, 00:07:20.410 "flush": true, 00:07:20.410 "reset": true, 00:07:20.410 "nvme_admin": false, 00:07:20.410 "nvme_io": false, 00:07:20.410 "nvme_io_md": false, 00:07:20.410 "write_zeroes": true, 00:07:20.410 "zcopy": true, 00:07:20.410 "get_zone_info": false, 00:07:20.410 "zone_management": false, 00:07:20.410 "zone_append": false, 00:07:20.410 "compare": false, 00:07:20.410 "compare_and_write": false, 00:07:20.410 "abort": true, 00:07:20.410 "seek_hole": false, 00:07:20.410 "seek_data": false, 00:07:20.410 "copy": true, 00:07:20.410 "nvme_iov_md": false 00:07:20.410 }, 00:07:20.410 "memory_domains": [ 00:07:20.410 { 00:07:20.410 "dma_device_id": "system", 00:07:20.410 "dma_device_type": 1 00:07:20.410 }, 00:07:20.410 { 00:07:20.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.410 "dma_device_type": 2 00:07:20.410 } 00:07:20.410 ], 00:07:20.410 "driver_specific": {} 00:07:20.410 } 00:07:20.410 ]' 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 [2024-11-06 09:59:23.825388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:20.410 
[2024-11-06 09:59:23.825417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.410 [2024-11-06 09:59:23.825431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b464f0 00:07:20.410 [2024-11-06 09:59:23.825439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.410 [2024-11-06 09:59:23.826706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.410 [2024-11-06 09:59:23.826727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:20.410 Passthru0 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.410 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.410 { 00:07:20.410 "name": "Malloc2", 00:07:20.410 "aliases": [ 00:07:20.410 "e1b1cb46-3c95-4e94-9808-bfc04186c12c" 00:07:20.410 ], 00:07:20.410 "product_name": "Malloc disk", 00:07:20.410 "block_size": 512, 00:07:20.410 "num_blocks": 16384, 00:07:20.410 "uuid": "e1b1cb46-3c95-4e94-9808-bfc04186c12c", 00:07:20.410 "assigned_rate_limits": { 00:07:20.410 "rw_ios_per_sec": 0, 00:07:20.410 "rw_mbytes_per_sec": 0, 00:07:20.410 "r_mbytes_per_sec": 0, 00:07:20.410 "w_mbytes_per_sec": 0 00:07:20.410 }, 00:07:20.410 "claimed": true, 00:07:20.410 "claim_type": "exclusive_write", 00:07:20.410 "zoned": false, 00:07:20.410 "supported_io_types": { 00:07:20.410 "read": true, 00:07:20.410 "write": true, 00:07:20.410 "unmap": true, 00:07:20.410 "flush": true, 00:07:20.410 "reset": true, 00:07:20.410 "nvme_admin": false, 00:07:20.410 "nvme_io": false, 00:07:20.410 "nvme_io_md": false, 00:07:20.410 "write_zeroes": true, 00:07:20.410 "zcopy": true, 00:07:20.410 "get_zone_info": false, 00:07:20.410 "zone_management": false, 00:07:20.410 "zone_append": false, 00:07:20.410 "compare": false, 00:07:20.410 "compare_and_write": false, 00:07:20.410 "abort": true, 00:07:20.410 "seek_hole": false, 00:07:20.410 "seek_data": false, 00:07:20.410 "copy": true, 00:07:20.410 "nvme_iov_md": false 00:07:20.410 }, 00:07:20.410 "memory_domains": [ 00:07:20.410 { 00:07:20.410 "dma_device_id": "system", 00:07:20.410 "dma_device_type": 1 00:07:20.410 }, 00:07:20.410 { 00:07:20.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.410 "dma_device_type": 2 00:07:20.410 } 00:07:20.410 ], 00:07:20.410 "driver_specific": {} 00:07:20.410 }, 00:07:20.410 { 00:07:20.410 "name": "Passthru0", 00:07:20.410 "aliases": [ 00:07:20.410 "36e415bb-c950-5d1c-a6e8-eb39fa4302e2" 00:07:20.410 ], 00:07:20.410 "product_name": "passthru", 00:07:20.410 "block_size": 512, 00:07:20.410 "num_blocks": 16384, 00:07:20.410 "uuid": "36e415bb-c950-5d1c-a6e8-eb39fa4302e2", 00:07:20.410 "assigned_rate_limits": { 00:07:20.410 "rw_ios_per_sec": 0, 00:07:20.410 "rw_mbytes_per_sec": 0, 00:07:20.410 "r_mbytes_per_sec": 0, 00:07:20.410 "w_mbytes_per_sec": 0 00:07:20.410 }, 00:07:20.410 "claimed": false, 00:07:20.410 "zoned": false, 00:07:20.410 "supported_io_types": { 00:07:20.410 "read": true, 00:07:20.410 "write": true, 00:07:20.410 "unmap": true, 00:07:20.410 "flush": true, 00:07:20.410 "reset": true, 
00:07:20.410 "nvme_admin": false, 00:07:20.410 "nvme_io": false, 00:07:20.410 "nvme_io_md": false, 00:07:20.410 "write_zeroes": true, 00:07:20.410 "zcopy": true, 00:07:20.410 "get_zone_info": false, 00:07:20.410 "zone_management": false, 00:07:20.410 "zone_append": false, 00:07:20.410 "compare": false, 00:07:20.410 "compare_and_write": false, 00:07:20.410 "abort": true, 00:07:20.410 "seek_hole": false, 00:07:20.411 "seek_data": false, 00:07:20.411 "copy": true, 00:07:20.411 "nvme_iov_md": false 00:07:20.411 }, 00:07:20.411 "memory_domains": [ 00:07:20.411 { 00:07:20.411 "dma_device_id": "system", 00:07:20.411 "dma_device_type": 1 00:07:20.411 }, 00:07:20.411 { 00:07:20.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.411 "dma_device_type": 2 00:07:20.411 } 00:07:20.411 ], 00:07:20.411 "driver_specific": { 00:07:20.411 "passthru": { 00:07:20.411 "name": "Passthru0", 00:07:20.411 "base_bdev_name": "Malloc2" 00:07:20.411 } 00:07:20.411 } 00:07:20.411 } 00:07:20.411 ]' 00:07:20.411 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:20.411 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.411 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.411 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.411 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:20.670 00:07:20.670 real 0m0.305s 00:07:20.670 user 0m0.199s 00:07:20.670 sys 0m0.035s 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.670 09:59:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 ************************************ 00:07:20.670 END TEST rpc_daemon_integrity 00:07:20.670 ************************************ 00:07:20.670 09:59:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:20.670 09:59:24 rpc -- rpc/rpc.sh@84 -- # killprocess 3636794 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@952 -- # '[' -z 3636794 ']' 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@956 -- # kill -0 3636794 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@957 -- # uname 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3636794 
00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3636794' 00:07:20.670 killing process with pid 3636794 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@971 -- # kill 3636794 00:07:20.670 09:59:24 rpc -- common/autotest_common.sh@976 -- # wait 3636794 00:07:20.930 00:07:20.930 real 0m2.609s 00:07:20.930 user 0m3.409s 00:07:20.930 sys 0m0.725s 00:07:20.930 09:59:24 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.930 09:59:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.930 ************************************ 00:07:20.930 END TEST rpc 00:07:20.930 ************************************ 00:07:20.930 09:59:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:20.930 09:59:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:20.930 09:59:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.930 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:07:20.930 ************************************ 00:07:20.930 START TEST skip_rpc 00:07:20.930 ************************************ 00:07:20.930 09:59:24 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:21.191 * Looking for test storage... 00:07:21.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.191 09:59:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.191 --rc genhtml_branch_coverage=1 00:07:21.191 --rc genhtml_function_coverage=1 00:07:21.191 --rc genhtml_legend=1 00:07:21.191 --rc geninfo_all_blocks=1 00:07:21.191 --rc geninfo_unexecuted_blocks=1 00:07:21.191 00:07:21.191 ' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.191 --rc genhtml_branch_coverage=1 00:07:21.191 --rc genhtml_function_coverage=1 00:07:21.191 --rc genhtml_legend=1 00:07:21.191 --rc geninfo_all_blocks=1 00:07:21.191 --rc geninfo_unexecuted_blocks=1 00:07:21.191 00:07:21.191 ' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.191 --rc genhtml_branch_coverage=1 00:07:21.191 --rc genhtml_function_coverage=1 00:07:21.191 --rc genhtml_legend=1 00:07:21.191 --rc geninfo_all_blocks=1 00:07:21.191 --rc geninfo_unexecuted_blocks=1 00:07:21.191 00:07:21.191 ' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.191 --rc genhtml_branch_coverage=1 00:07:21.191 --rc genhtml_function_coverage=1 00:07:21.191 --rc genhtml_legend=1 00:07:21.191 --rc geninfo_all_blocks=1 00:07:21.191 --rc geninfo_unexecuted_blocks=1 00:07:21.191 00:07:21.191 ' 00:07:21.191 09:59:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:21.191 09:59:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:21.191 09:59:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.191 09:59:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.191 ************************************ 00:07:21.192 START TEST skip_rpc 00:07:21.192 ************************************ 00:07:21.192 09:59:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:21.192 
09:59:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3637436 00:07:21.192 09:59:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.192 09:59:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:21.192 09:59:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:21.192 [2024-11-06 09:59:24.652275] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:21.192 [2024-11-06 09:59:24.652321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637436 ] 00:07:21.452 [2024-11-06 09:59:24.730361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.452 [2024-11-06 09:59:24.767546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3637436 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3637436 ']' 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3637436 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3637436 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3637436' 00:07:26.735 killing process with pid 3637436 00:07:26.735 09:59:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3637436 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3637436 00:07:26.735 00:07:26.735 real 0m5.286s 00:07:26.735 user 0m5.090s 00:07:26.735 sys 0m0.245s 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.735 09:59:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.735 ************************************ 00:07:26.735 END TEST skip_rpc 00:07:26.735 ************************************ 00:07:26.735 09:59:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:26.735 09:59:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.735 09:59:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.735 09:59:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.735 ************************************ 00:07:26.735 START TEST skip_rpc_with_json 00:07:26.735 ************************************ 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3638578 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3638578 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3638578 ']' 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.735 09:59:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.735 [2024-11-06 09:59:30.017501] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:26.735 [2024-11-06 09:59:30.017559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638578 ] 00:07:26.735 [2024-11-06 09:59:30.103876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.735 [2024-11-06 09:59:30.144205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.676 [2024-11-06 09:59:30.830733] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:27.676 request: 00:07:27.676 { 00:07:27.676 "trtype": "tcp", 00:07:27.676 "method": "nvmf_get_transports", 00:07:27.676 "req_id": 1 00:07:27.676 } 00:07:27.676 Got JSON-RPC error response 00:07:27.676 response: 00:07:27.676 { 00:07:27.676 "code": -19, 00:07:27.676 "message": "No such device" 00:07:27.676 } 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.676 [2024-11-06 09:59:30.842859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.676 09:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.676 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.676 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:27.676 { 00:07:27.676 "subsystems": [ 00:07:27.676 { 00:07:27.676 "subsystem": "fsdev", 00:07:27.676 "config": [ 00:07:27.676 { 00:07:27.676 "method": "fsdev_set_opts", 00:07:27.676 "params": { 00:07:27.676 "fsdev_io_pool_size": 65535, 00:07:27.676 "fsdev_io_cache_size": 256 00:07:27.676 } 00:07:27.676 } 00:07:27.676 ] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "vfio_user_target", 00:07:27.676 "config": null 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "keyring", 00:07:27.676 "config": [] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "iobuf", 00:07:27.676 "config": [ 00:07:27.676 { 00:07:27.676 "method": "iobuf_set_options", 00:07:27.676 "params": { 00:07:27.676 "small_pool_count": 8192, 00:07:27.676 "large_pool_count": 1024, 00:07:27.676 "small_bufsize": 8192, 00:07:27.676 "large_bufsize": 135168, 00:07:27.676 "enable_numa": false 00:07:27.676 } 00:07:27.676 } 
00:07:27.676 ] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "sock", 00:07:27.676 "config": [ 00:07:27.676 { 00:07:27.676 "method": "sock_set_default_impl", 00:07:27.676 "params": { 00:07:27.676 "impl_name": "posix" 00:07:27.676 } 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "method": "sock_impl_set_options", 00:07:27.676 "params": { 00:07:27.676 "impl_name": "ssl", 00:07:27.676 "recv_buf_size": 4096, 00:07:27.676 "send_buf_size": 4096, 00:07:27.676 "enable_recv_pipe": true, 00:07:27.676 "enable_quickack": false, 00:07:27.676 "enable_placement_id": 0, 00:07:27.676 "enable_zerocopy_send_server": true, 00:07:27.676 "enable_zerocopy_send_client": false, 00:07:27.676 "zerocopy_threshold": 0, 00:07:27.676 "tls_version": 0, 00:07:27.676 "enable_ktls": false 00:07:27.676 } 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "method": "sock_impl_set_options", 00:07:27.676 "params": { 00:07:27.676 "impl_name": "posix", 00:07:27.676 "recv_buf_size": 2097152, 00:07:27.676 "send_buf_size": 2097152, 00:07:27.676 "enable_recv_pipe": true, 00:07:27.676 "enable_quickack": false, 00:07:27.676 "enable_placement_id": 0, 00:07:27.676 "enable_zerocopy_send_server": true, 00:07:27.676 "enable_zerocopy_send_client": false, 00:07:27.676 "zerocopy_threshold": 0, 00:07:27.676 "tls_version": 0, 00:07:27.676 "enable_ktls": false 00:07:27.676 } 00:07:27.676 } 00:07:27.676 ] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "vmd", 00:07:27.676 "config": [] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "accel", 00:07:27.676 "config": [ 00:07:27.676 { 00:07:27.676 "method": "accel_set_options", 00:07:27.676 "params": { 00:07:27.676 "small_cache_size": 128, 00:07:27.676 "large_cache_size": 16, 00:07:27.676 "task_count": 2048, 00:07:27.676 "sequence_count": 2048, 00:07:27.676 "buf_count": 2048 00:07:27.676 } 00:07:27.676 } 00:07:27.676 ] 00:07:27.676 }, 00:07:27.676 { 00:07:27.676 "subsystem": "bdev", 00:07:27.676 "config": [ 00:07:27.676 { 00:07:27.676 "method": "bdev_set_options", 00:07:27.676 "params": { 00:07:27.677 "bdev_io_pool_size": 65535, 00:07:27.677 "bdev_io_cache_size": 256, 00:07:27.677 "bdev_auto_examine": true, 00:07:27.677 "iobuf_small_cache_size": 128, 00:07:27.677 "iobuf_large_cache_size": 16 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "bdev_raid_set_options", 00:07:27.677 "params": { 00:07:27.677 "process_window_size_kb": 1024, 00:07:27.677 "process_max_bandwidth_mb_sec": 0 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "bdev_iscsi_set_options", 00:07:27.677 "params": { 00:07:27.677 "timeout_sec": 30 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "bdev_nvme_set_options", 00:07:27.677 "params": { 00:07:27.677 "action_on_timeout": "none", 00:07:27.677 "timeout_us": 0, 00:07:27.677 "timeout_admin_us": 0, 00:07:27.677 "keep_alive_timeout_ms": 10000, 00:07:27.677 "arbitration_burst": 0, 00:07:27.677 "low_priority_weight": 0, 00:07:27.677 "medium_priority_weight": 0, 00:07:27.677 "high_priority_weight": 0, 00:07:27.677 "nvme_adminq_poll_period_us": 10000, 00:07:27.677 "nvme_ioq_poll_period_us": 0, 00:07:27.677 "io_queue_requests": 0, 00:07:27.677 "delay_cmd_submit": true, 00:07:27.677 "transport_retry_count": 4, 00:07:27.677 "bdev_retry_count": 3, 00:07:27.677 "transport_ack_timeout": 0, 00:07:27.677 "ctrlr_loss_timeout_sec": 0, 00:07:27.677 "reconnect_delay_sec": 0, 00:07:27.677 "fast_io_fail_timeout_sec": 0, 00:07:27.677 "disable_auto_failback": false, 00:07:27.677 "generate_uuids": false, 00:07:27.677 "transport_tos": 
0, 00:07:27.677 "nvme_error_stat": false, 00:07:27.677 "rdma_srq_size": 0, 00:07:27.677 "io_path_stat": false, 00:07:27.677 "allow_accel_sequence": false, 00:07:27.677 "rdma_max_cq_size": 0, 00:07:27.677 "rdma_cm_event_timeout_ms": 0, 00:07:27.677 "dhchap_digests": [ 00:07:27.677 "sha256", 00:07:27.677 "sha384", 00:07:27.677 "sha512" 00:07:27.677 ], 00:07:27.677 "dhchap_dhgroups": [ 00:07:27.677 "null", 00:07:27.677 "ffdhe2048", 00:07:27.677 "ffdhe3072", 00:07:27.677 "ffdhe4096", 00:07:27.677 "ffdhe6144", 00:07:27.677 "ffdhe8192" 00:07:27.677 ] 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "bdev_nvme_set_hotplug", 00:07:27.677 "params": { 00:07:27.677 "period_us": 100000, 00:07:27.677 "enable": false 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "bdev_wait_for_examine" 00:07:27.677 } 00:07:27.677 ] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "scsi", 00:07:27.677 "config": null 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "scheduler", 00:07:27.677 "config": [ 00:07:27.677 { 00:07:27.677 "method": "framework_set_scheduler", 00:07:27.677 "params": { 00:07:27.677 "name": "static" 00:07:27.677 } 00:07:27.677 } 00:07:27.677 ] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "vhost_scsi", 00:07:27.677 "config": [] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "vhost_blk", 00:07:27.677 "config": [] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "ublk", 00:07:27.677 "config": [] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "nbd", 00:07:27.677 "config": [] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "nvmf", 00:07:27.677 "config": [ 00:07:27.677 { 00:07:27.677 "method": "nvmf_set_config", 00:07:27.677 "params": { 00:07:27.677 "discovery_filter": "match_any", 00:07:27.677 "admin_cmd_passthru": { 00:07:27.677 "identify_ctrlr": false 00:07:27.677 }, 00:07:27.677 "dhchap_digests": [ 00:07:27.677 "sha256", 00:07:27.677 "sha384", 00:07:27.677 "sha512" 00:07:27.677 ], 00:07:27.677 "dhchap_dhgroups": [ 00:07:27.677 "null", 00:07:27.677 "ffdhe2048", 00:07:27.677 "ffdhe3072", 00:07:27.677 "ffdhe4096", 00:07:27.677 "ffdhe6144", 00:07:27.677 "ffdhe8192" 00:07:27.677 ] 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "nvmf_set_max_subsystems", 00:07:27.677 "params": { 00:07:27.677 "max_subsystems": 1024 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "nvmf_set_crdt", 00:07:27.677 "params": { 00:07:27.677 "crdt1": 0, 00:07:27.677 "crdt2": 0, 00:07:27.677 "crdt3": 0 00:07:27.677 } 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "method": "nvmf_create_transport", 00:07:27.677 "params": { 00:07:27.677 "trtype": "TCP", 00:07:27.677 "max_queue_depth": 128, 00:07:27.677 "max_io_qpairs_per_ctrlr": 127, 00:07:27.677 "in_capsule_data_size": 4096, 00:07:27.677 "max_io_size": 131072, 00:07:27.677 "io_unit_size": 131072, 00:07:27.677 "max_aq_depth": 128, 00:07:27.677 "num_shared_buffers": 511, 00:07:27.677 "buf_cache_size": 4294967295, 00:07:27.677 "dif_insert_or_strip": false, 00:07:27.677 "zcopy": false, 00:07:27.677 "c2h_success": true, 00:07:27.677 "sock_priority": 0, 00:07:27.677 "abort_timeout_sec": 1, 00:07:27.677 "ack_timeout": 0, 00:07:27.677 "data_wr_pool_size": 0 00:07:27.677 } 00:07:27.677 } 00:07:27.677 ] 00:07:27.677 }, 00:07:27.677 { 00:07:27.677 "subsystem": "iscsi", 00:07:27.677 "config": [ 00:07:27.677 { 00:07:27.677 "method": "iscsi_set_options", 00:07:27.677 "params": { 00:07:27.677 "node_base": "iqn.2016-06.io.spdk", 00:07:27.677 "max_sessions": 
128, 00:07:27.677 "max_connections_per_session": 2, 00:07:27.677 "max_queue_depth": 64, 00:07:27.677 "default_time2wait": 2, 00:07:27.677 "default_time2retain": 20, 00:07:27.677 "first_burst_length": 8192, 00:07:27.677 "immediate_data": true, 00:07:27.677 "allow_duplicated_isid": false, 00:07:27.677 "error_recovery_level": 0, 00:07:27.677 "nop_timeout": 60, 00:07:27.677 "nop_in_interval": 30, 00:07:27.677 "disable_chap": false, 00:07:27.677 "require_chap": false, 00:07:27.677 "mutual_chap": false, 00:07:27.677 "chap_group": 0, 00:07:27.677 "max_large_datain_per_connection": 64, 00:07:27.677 "max_r2t_per_connection": 4, 00:07:27.677 "pdu_pool_size": 36864, 00:07:27.677 "immediate_data_pool_size": 16384, 00:07:27.677 "data_out_pool_size": 2048 00:07:27.677 } 00:07:27.677 } 00:07:27.677 ] 00:07:27.677 } 00:07:27.677 ] 00:07:27.677 } 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3638578 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3638578 ']' 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3638578 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3638578 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3638578' 00:07:27.677 killing process with pid 3638578 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3638578 00:07:27.677 09:59:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3638578 00:07:27.938 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3638813 00:07:27.938 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:27.938 09:59:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3638813 ']' 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3638813' 00:07:33.378 killing process with pid 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3638813 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:33.378 00:07:33.378 real 0m6.617s 00:07:33.378 user 0m6.500s 00:07:33.378 sys 0m0.598s 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:33.378 ************************************ 00:07:33.378 END TEST skip_rpc_with_json 00:07:33.378 ************************************ 00:07:33.378 09:59:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:33.378 09:59:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.378 09:59:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.378 09:59:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.378 ************************************ 00:07:33.378 START TEST skip_rpc_with_delay 00:07:33.378 ************************************ 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:33.378 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:33.379 
[2024-11-06 09:59:36.715029] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.379 00:07:33.379 real 0m0.079s 00:07:33.379 user 0m0.050s 00:07:33.379 sys 0m0.029s 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.379 09:59:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:33.379 ************************************ 00:07:33.379 END TEST skip_rpc_with_delay 00:07:33.379 ************************************ 00:07:33.379 09:59:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:33.379 09:59:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:33.379 09:59:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:33.379 09:59:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.379 09:59:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.379 09:59:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.379 ************************************ 00:07:33.379 START TEST exit_on_failed_rpc_init 00:07:33.379 ************************************ 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3640103 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3640103 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3640103 ']' 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.379 09:59:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:33.379 [2024-11-06 09:59:36.872472] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:33.379 [2024-11-06 09:59:36.872530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640103 ] 00:07:33.639 [2024-11-06 09:59:36.954770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.639 [2024-11-06 09:59:36.996806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:34.209 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.469 [2024-11-06 09:59:37.733739] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:34.469 [2024-11-06 09:59:37.733793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640228 ] 00:07:34.469 [2024-11-06 09:59:37.827741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.469 [2024-11-06 09:59:37.863588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.469 [2024-11-06 09:59:37.863638] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:34.469 [2024-11-06 09:59:37.863647] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:34.469 [2024-11-06 09:59:37.863655] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3640103 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3640103 ']' 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3640103 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.469 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3640103 00:07:34.730 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:34.730 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:34.730 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3640103' 00:07:34.730 killing process with pid 3640103 00:07:34.730 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3640103 00:07:34.730 09:59:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3640103 00:07:34.730 00:07:34.730 real 0m1.362s 00:07:34.730 user 0m1.611s 00:07:34.730 sys 0m0.375s 00:07:34.730 09:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.730 09:59:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:34.730 ************************************ 00:07:34.730 END TEST exit_on_failed_rpc_init 00:07:34.730 ************************************ 00:07:34.730 09:59:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:34.730 00:07:34.730 real 0m13.863s 00:07:34.730 user 0m13.484s 00:07:34.730 sys 0m1.560s 00:07:34.730 09:59:38 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.730 09:59:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.730 ************************************ 00:07:34.730 END TEST skip_rpc 00:07:34.730 ************************************ 00:07:34.991 09:59:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:34.991 09:59:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.991 09:59:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.991 09:59:38 -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.991 ************************************ 00:07:34.991 START TEST rpc_client 00:07:34.991 ************************************ 00:07:34.991 09:59:38 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:34.991 * Looking for test storage... 00:07:34.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:34.991 09:59:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.991 09:59:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.991 09:59:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.991 09:59:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.991 09:59:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:34.992 09:59:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.254 09:59:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.254 09:59:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.254 09:59:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.254 --rc genhtml_branch_coverage=1 00:07:35.254 --rc genhtml_function_coverage=1 00:07:35.254 --rc genhtml_legend=1 00:07:35.254 --rc geninfo_all_blocks=1 00:07:35.254 --rc geninfo_unexecuted_blocks=1 00:07:35.254 00:07:35.254 ' 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.254 --rc genhtml_branch_coverage=1 00:07:35.254 --rc genhtml_function_coverage=1 00:07:35.254 --rc genhtml_legend=1 00:07:35.254 --rc geninfo_all_blocks=1 00:07:35.254 --rc geninfo_unexecuted_blocks=1 00:07:35.254 00:07:35.254 ' 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.254 --rc genhtml_branch_coverage=1 00:07:35.254 --rc genhtml_function_coverage=1 00:07:35.254 --rc genhtml_legend=1 00:07:35.254 --rc geninfo_all_blocks=1 00:07:35.254 --rc geninfo_unexecuted_blocks=1 00:07:35.254 00:07:35.254 ' 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.254 --rc genhtml_branch_coverage=1 00:07:35.254 --rc genhtml_function_coverage=1 00:07:35.254 --rc genhtml_legend=1 00:07:35.254 --rc geninfo_all_blocks=1 00:07:35.254 --rc geninfo_unexecuted_blocks=1 00:07:35.254 00:07:35.254 ' 00:07:35.254 09:59:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:35.254 OK 00:07:35.254 09:59:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:35.254 00:07:35.254 real 0m0.227s 00:07:35.254 user 0m0.125s 00:07:35.254 sys 0m0.116s 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.254 09:59:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:35.254 ************************************ 00:07:35.254 END TEST rpc_client 00:07:35.254 ************************************ 00:07:35.254 09:59:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:07:35.254 09:59:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.254 09:59:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.254 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.254 ************************************ 00:07:35.254 START TEST json_config 00:07:35.254 ************************************ 00:07:35.254 09:59:38 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:35.254 09:59:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.254 09:59:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.254 09:59:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.254 09:59:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.254 09:59:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.254 09:59:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.254 09:59:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.254 09:59:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.254 09:59:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.254 09:59:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:35.254 09:59:38 json_config -- scripts/common.sh@345 -- # : 1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.254 09:59:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.254 09:59:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@353 -- # local d=1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.254 09:59:38 json_config -- scripts/common.sh@355 -- # echo 1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.254 09:59:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@353 -- # local d=2 00:07:35.254 09:59:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.254 09:59:38 json_config -- scripts/common.sh@355 -- # echo 2 00:07:35.516 09:59:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.516 09:59:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.516 09:59:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.516 09:59:38 json_config -- scripts/common.sh@368 -- # return 0 00:07:35.516 09:59:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.516 09:59:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.516 --rc genhtml_branch_coverage=1 00:07:35.516 --rc genhtml_function_coverage=1 00:07:35.516 --rc genhtml_legend=1 00:07:35.516 --rc geninfo_all_blocks=1 00:07:35.516 --rc geninfo_unexecuted_blocks=1 00:07:35.516 00:07:35.516 ' 00:07:35.516 09:59:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.516 --rc genhtml_branch_coverage=1 00:07:35.516 --rc genhtml_function_coverage=1 00:07:35.516 --rc genhtml_legend=1 00:07:35.516 --rc geninfo_all_blocks=1 00:07:35.516 --rc geninfo_unexecuted_blocks=1 00:07:35.516 00:07:35.516 ' 00:07:35.516 09:59:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.516 --rc genhtml_branch_coverage=1 00:07:35.516 --rc genhtml_function_coverage=1 00:07:35.516 --rc genhtml_legend=1 00:07:35.516 --rc geninfo_all_blocks=1 00:07:35.516 --rc geninfo_unexecuted_blocks=1 00:07:35.516 00:07:35.516 ' 00:07:35.516 09:59:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.516 --rc genhtml_branch_coverage=1 00:07:35.516 --rc genhtml_function_coverage=1 00:07:35.516 --rc genhtml_legend=1 00:07:35.516 --rc geninfo_all_blocks=1 00:07:35.516 --rc geninfo_unexecuted_blocks=1 00:07:35.516 00:07:35.516 ' 00:07:35.516 09:59:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.516 09:59:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:07:35.517 09:59:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.517 09:59:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.517 09:59:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.517 09:59:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.517 09:59:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.517 09:59:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.517 09:59:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.517 09:59:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.517 09:59:38 json_config -- paths/export.sh@5 -- # export PATH 00:07:35.517 09:59:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@51 -- # : 0 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:07:35.517 09:59:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.517 09:59:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:35.517 INFO: JSON configuration test init 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.517 09:59:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:35.517 09:59:38 json_config -- 
json_config/common.sh@9 -- # local app=target 00:07:35.517 09:59:38 json_config -- json_config/common.sh@10 -- # shift 00:07:35.517 09:59:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:35.517 09:59:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:35.517 09:59:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:35.517 09:59:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.517 09:59:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.517 09:59:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3640682 00:07:35.517 09:59:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:35.517 Waiting for target to run... 00:07:35.517 09:59:38 json_config -- json_config/common.sh@25 -- # waitforlisten 3640682 /var/tmp/spdk_tgt.sock 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 3640682 ']' 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:35.517 09:59:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.517 09:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.517 [2024-11-06 09:59:38.871952] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:35.517 [2024-11-06 09:59:38.872030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640682 ] 00:07:35.778 [2024-11-06 09:59:39.154937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.778 [2024-11-06 09:59:39.185008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:36.349 09:59:39 json_config -- json_config/common.sh@26 -- # echo '' 00:07:36.349 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.349 09:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:36.349 09:59:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:36.349 09:59:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:36.921 09:59:40 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:36.921 09:59:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:36.921 09:59:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.921 09:59:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.921 09:59:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:36.922 09:59:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:36.922 09:59:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:37.183 09:59:40 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@54 -- # sort 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:37.183 09:59:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.183 09:59:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:37.183 09:59:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.183 09:59:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:37.183 09:59:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:37.183 09:59:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:37.183 MallocForNvmf0 00:07:37.443 09:59:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:37.443 09:59:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:37.443 MallocForNvmf1 00:07:37.443 09:59:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:37.443 09:59:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:37.703 [2024-11-06 09:59:41.038847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.703 09:59:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.703 09:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.963 09:59:41 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:37.963 09:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:37.963 09:59:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:37.963 09:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:38.223 09:59:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:38.223 09:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:38.484 [2024-11-06 09:59:41.725092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:38.484 09:59:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:38.484 09:59:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.484 09:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.484 09:59:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:38.484 09:59:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.484 09:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.484 09:59:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:38.484 09:59:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:38.484 09:59:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:38.744 MallocBdevForConfigChangeCheck 00:07:38.744 09:59:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:38.744 09:59:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.744 09:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.744 09:59:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:38.744 09:59:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:39.004 09:59:42 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:39.004 INFO: shutting down applications... 
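The trace above walks through create_nvmf_subsystem_config: two malloc bdevs are created, a TCP transport is initialized, and both bdevs are exposed as namespaces of a single subsystem with a loopback TCP listener. Condensed into the equivalent manual RPC sequence (a sketch that restates only the calls visible in the trace; the socket path and arguments are the ones this run uses):
  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"          # same RPC socket as the spdk_tgt started above
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0      # 8 MB malloc bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB malloc bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0            # TCP transport; -u io-unit-size, -c in-capsule-data-size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420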
00:07:39.004 09:59:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:39.004 09:59:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:39.004 09:59:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:39.004 09:59:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:39.264 Calling clear_iscsi_subsystem 00:07:39.264 Calling clear_nvmf_subsystem 00:07:39.264 Calling clear_nbd_subsystem 00:07:39.264 Calling clear_ublk_subsystem 00:07:39.264 Calling clear_vhost_blk_subsystem 00:07:39.264 Calling clear_vhost_scsi_subsystem 00:07:39.264 Calling clear_bdev_subsystem 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:39.524 09:59:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:39.784 09:59:43 json_config -- json_config/json_config.sh@352 -- # break 00:07:39.784 09:59:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:39.784 09:59:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:39.784 09:59:43 json_config -- json_config/common.sh@31 -- # local app=target 00:07:39.784 09:59:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:39.784 09:59:43 json_config -- json_config/common.sh@35 -- # [[ -n 3640682 ]] 00:07:39.784 09:59:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3640682 00:07:39.784 09:59:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:39.784 09:59:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.784 09:59:43 json_config -- json_config/common.sh@41 -- # kill -0 3640682 00:07:39.784 09:59:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.355 09:59:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.355 09:59:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.355 09:59:43 json_config -- json_config/common.sh@41 -- # kill -0 3640682 00:07:40.355 09:59:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:40.355 09:59:43 json_config -- json_config/common.sh@43 -- # break 00:07:40.355 09:59:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:40.355 09:59:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:40.355 SPDK target shutdown done 00:07:40.355 09:59:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:40.355 INFO: relaunching applications... 
00:07:40.355 09:59:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:40.355 09:59:43 json_config -- json_config/common.sh@9 -- # local app=target 00:07:40.355 09:59:43 json_config -- json_config/common.sh@10 -- # shift 00:07:40.355 09:59:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:40.355 09:59:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:40.355 09:59:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:40.355 09:59:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:40.355 09:59:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:40.355 09:59:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3641730 00:07:40.355 09:59:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:40.355 Waiting for target to run... 00:07:40.355 09:59:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3641730 /var/tmp/spdk_tgt.sock 00:07:40.355 09:59:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@833 -- # '[' -z 3641730 ']' 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.355 09:59:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:40.355 [2024-11-06 09:59:43.675901] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:40.355 [2024-11-06 09:59:43.675974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641730 ] 00:07:40.616 [2024-11-06 09:59:44.011269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.616 [2024-11-06 09:59:44.040738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.186 [2024-11-06 09:59:44.564060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.186 [2024-11-06 09:59:44.596452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:41.186 09:59:44 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.186 09:59:44 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:41.186 09:59:44 json_config -- json_config/common.sh@26 -- # echo '' 00:07:41.186 00:07:41.186 09:59:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:41.186 09:59:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:41.186 INFO: Checking if target configuration is the same... 
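The comparison that follows checks whether the relaunched target still matches the JSON file it was started from: json_diff.sh dumps the running configuration over RPC, normalizes both sides with config_filter.py -method sort, and diffs the results. A minimal sketch of the same idea using only the tools seen in the trace (the temporary file names here are illustrative; the test itself uses mktemp-generated paths and /dev/fd redirection):
  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC save_config > live_config.json                                        # dump what the target is running now
  ./test/json_config/config_filter.py -method sort < live_config.json     > live_sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > file_sorted.json
  diff -u file_sorted.json live_sorted.json && echo 'INFO: JSON config files are the same'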
00:07:41.186 09:59:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:41.186 09:59:44 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.186 09:59:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:41.186 + '[' 2 -ne 2 ']' 00:07:41.186 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:41.186 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:41.186 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.186 +++ basename /dev/fd/62 00:07:41.186 ++ mktemp /tmp/62.XXX 00:07:41.186 + tmp_file_1=/tmp/62.mCs 00:07:41.186 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.186 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:41.186 + tmp_file_2=/tmp/spdk_tgt_config.json.I44 00:07:41.186 + ret=0 00:07:41.186 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:41.758 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:41.758 + diff -u /tmp/62.mCs /tmp/spdk_tgt_config.json.I44 00:07:41.758 + echo 'INFO: JSON config files are the same' 00:07:41.758 INFO: JSON config files are the same 00:07:41.758 + rm /tmp/62.mCs /tmp/spdk_tgt_config.json.I44 00:07:41.758 + exit 0 00:07:41.758 09:59:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:41.758 09:59:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:41.758 INFO: changing configuration and checking if this can be detected... 00:07:41.758 09:59:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:41.758 09:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:41.758 09:59:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.758 09:59:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:41.758 09:59:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:41.758 + '[' 2 -ne 2 ']' 00:07:41.758 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:41.758 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:41.758 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.758 +++ basename /dev/fd/62 00:07:41.758 ++ mktemp /tmp/62.XXX 00:07:41.758 + tmp_file_1=/tmp/62.Sw8 00:07:41.758 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.758 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:41.758 + tmp_file_2=/tmp/spdk_tgt_config.json.kyH 00:07:41.758 + ret=0 00:07:41.758 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:42.330 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:42.330 + diff -u /tmp/62.Sw8 /tmp/spdk_tgt_config.json.kyH 00:07:42.330 + ret=1 00:07:42.330 + echo '=== Start of file: /tmp/62.Sw8 ===' 00:07:42.330 + cat /tmp/62.Sw8 00:07:42.330 + echo '=== End of file: /tmp/62.Sw8 ===' 00:07:42.330 + echo '' 00:07:42.330 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kyH ===' 00:07:42.330 + cat /tmp/spdk_tgt_config.json.kyH 00:07:42.330 + echo '=== End of file: /tmp/spdk_tgt_config.json.kyH ===' 00:07:42.330 + echo '' 00:07:42.331 + rm /tmp/62.Sw8 /tmp/spdk_tgt_config.json.kyH 00:07:42.331 + exit 1 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:42.331 INFO: configuration change detected. 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 3641730 ]] 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.331 09:59:45 json_config -- json_config/json_config.sh@330 -- # killprocess 3641730 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@952 -- # '[' -z 3641730 ']' 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@956 -- # kill -0 3641730 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@957 -- # uname 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.331 09:59:45 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3641730 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3641730' 00:07:42.331 killing process with pid 3641730 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@971 -- # kill 3641730 00:07:42.331 09:59:45 json_config -- common/autotest_common.sh@976 -- # wait 3641730 00:07:42.592 09:59:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:42.592 09:59:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:42.592 09:59:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.592 09:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.592 09:59:46 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:42.592 09:59:46 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:42.592 INFO: Success 00:07:42.592 00:07:42.592 real 0m7.439s 00:07:42.592 user 0m8.983s 00:07:42.592 sys 0m1.974s 00:07:42.592 09:59:46 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.592 09:59:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.592 ************************************ 00:07:42.592 END TEST json_config 00:07:42.592 ************************************ 00:07:42.592 09:59:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:42.592 09:59:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:42.592 09:59:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.592 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:07:42.854 ************************************ 00:07:42.854 START TEST json_config_extra_key 00:07:42.854 ************************************ 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.854 09:59:46 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.854 09:59:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:42.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.854 --rc genhtml_branch_coverage=1 00:07:42.854 --rc genhtml_function_coverage=1 00:07:42.854 --rc genhtml_legend=1 00:07:42.854 --rc geninfo_all_blocks=1 00:07:42.854 --rc geninfo_unexecuted_blocks=1 00:07:42.854 00:07:42.854 ' 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:42.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.854 --rc genhtml_branch_coverage=1 00:07:42.854 --rc genhtml_function_coverage=1 00:07:42.854 --rc genhtml_legend=1 00:07:42.854 --rc geninfo_all_blocks=1 00:07:42.854 --rc geninfo_unexecuted_blocks=1 00:07:42.854 00:07:42.854 ' 00:07:42.854 09:59:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:42.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.855 --rc genhtml_branch_coverage=1 00:07:42.855 --rc genhtml_function_coverage=1 00:07:42.855 --rc genhtml_legend=1 00:07:42.855 --rc geninfo_all_blocks=1 00:07:42.855 --rc geninfo_unexecuted_blocks=1 00:07:42.855 00:07:42.855 ' 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:42.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.855 --rc genhtml_branch_coverage=1 00:07:42.855 --rc genhtml_function_coverage=1 00:07:42.855 --rc genhtml_legend=1 00:07:42.855 --rc geninfo_all_blocks=1 00:07:42.855 --rc geninfo_unexecuted_blocks=1 00:07:42.855 00:07:42.855 ' 00:07:42.855 09:59:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.855 09:59:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.855 09:59:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.855 09:59:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.855 09:59:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.855 09:59:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.855 09:59:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.855 09:59:46 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.855 09:59:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:42.855 09:59:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.855 09:59:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:42.855 INFO: launching applications... 
00:07:42.855 09:59:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3642289 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:42.855 Waiting for target to run... 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3642289 /var/tmp/spdk_tgt.sock 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3642289 ']' 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.855 09:59:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.855 09:59:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:43.116 [2024-11-06 09:59:46.383101] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:43.116 [2024-11-06 09:59:46.383158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642289 ] 00:07:43.376 [2024-11-06 09:59:46.719155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.376 [2024-11-06 09:59:46.749217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.947 09:59:47 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.947 09:59:47 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:43.947 00:07:43.947 09:59:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:43.947 INFO: shutting down applications... 
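As in the earlier json_config run, teardown goes through json_config_test_shutdown_app: SIGINT is sent to the recorded target pid and the script polls until the process exits before declaring shutdown done. A rough sketch of the loop the trace below is exercising (the pid is the one recorded above; the 30 x 0.5 s bound matches the i < 30 / sleep 0.5 steps in the trace, while the exact loop structure in common.sh may differ slightly):
  app_pid=3642289                                 # pid recorded when spdk_tgt was launched with extra_key.json
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break     # stop polling once the process is gone
      sleep 0.5
  done
  echo 'SPDK target shutdown done'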
00:07:43.947 09:59:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3642289 ]] 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3642289 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3642289 00:07:43.947 09:59:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3642289 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:44.207 09:59:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:44.207 SPDK target shutdown done 00:07:44.207 09:59:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:44.207 Success 00:07:44.207 00:07:44.207 real 0m1.569s 00:07:44.207 user 0m1.169s 00:07:44.207 sys 0m0.443s 00:07:44.207 09:59:47 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.207 09:59:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 ************************************ 00:07:44.207 END TEST json_config_extra_key 00:07:44.207 ************************************ 00:07:44.470 09:59:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.470 09:59:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:44.470 09:59:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.470 09:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.470 ************************************ 00:07:44.470 START TEST alias_rpc 00:07:44.470 ************************************ 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.470 * Looking for test storage... 
00:07:44.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.470 09:59:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.470 --rc genhtml_branch_coverage=1 00:07:44.470 --rc genhtml_function_coverage=1 00:07:44.470 --rc genhtml_legend=1 00:07:44.470 --rc geninfo_all_blocks=1 00:07:44.470 --rc geninfo_unexecuted_blocks=1 00:07:44.470 00:07:44.470 ' 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.470 --rc genhtml_branch_coverage=1 00:07:44.470 --rc genhtml_function_coverage=1 00:07:44.470 --rc genhtml_legend=1 00:07:44.470 --rc geninfo_all_blocks=1 00:07:44.470 --rc geninfo_unexecuted_blocks=1 00:07:44.470 00:07:44.470 ' 00:07:44.470 09:59:47 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.470 --rc genhtml_branch_coverage=1 00:07:44.470 --rc genhtml_function_coverage=1 00:07:44.470 --rc genhtml_legend=1 00:07:44.470 --rc geninfo_all_blocks=1 00:07:44.470 --rc geninfo_unexecuted_blocks=1 00:07:44.470 00:07:44.470 ' 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.470 --rc genhtml_branch_coverage=1 00:07:44.470 --rc genhtml_function_coverage=1 00:07:44.470 --rc genhtml_legend=1 00:07:44.470 --rc geninfo_all_blocks=1 00:07:44.470 --rc geninfo_unexecuted_blocks=1 00:07:44.470 00:07:44.470 ' 00:07:44.470 09:59:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:44.470 09:59:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3642680 00:07:44.470 09:59:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3642680 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3642680 ']' 00:07:44.470 09:59:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.470 09:59:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.732 [2024-11-06 09:59:48.000630] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:44.732 [2024-11-06 09:59:48.000685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642680 ] 00:07:44.732 [2024-11-06 09:59:48.079345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.732 [2024-11-06 09:59:48.115651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.305 09:59:48 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:45.305 09:59:48 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:45.305 09:59:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:45.565 09:59:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3642680 00:07:45.565 09:59:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3642680 ']' 00:07:45.565 09:59:49 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3642680 00:07:45.565 09:59:49 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:45.565 09:59:49 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.565 09:59:49 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642680 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642680' 00:07:45.826 killing process with pid 3642680 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@971 -- # kill 3642680 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@976 -- # wait 3642680 00:07:45.826 00:07:45.826 real 0m1.525s 00:07:45.826 user 0m1.669s 00:07:45.826 sys 0m0.429s 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.826 09:59:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.826 ************************************ 00:07:45.826 END TEST alias_rpc 00:07:45.826 ************************************ 00:07:45.826 09:59:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:45.826 09:59:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:45.826 09:59:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.826 09:59:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.826 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.087 ************************************ 00:07:46.087 START TEST spdkcli_tcp 00:07:46.087 ************************************ 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:46.087 * Looking for test storage... 
00:07:46.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.087 09:59:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.087 --rc genhtml_branch_coverage=1 00:07:46.087 --rc genhtml_function_coverage=1 00:07:46.087 --rc genhtml_legend=1 00:07:46.087 --rc geninfo_all_blocks=1 00:07:46.087 --rc geninfo_unexecuted_blocks=1 00:07:46.087 00:07:46.087 ' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.087 --rc genhtml_branch_coverage=1 00:07:46.087 --rc genhtml_function_coverage=1 00:07:46.087 --rc genhtml_legend=1 00:07:46.087 --rc geninfo_all_blocks=1 00:07:46.087 --rc 
geninfo_unexecuted_blocks=1 00:07:46.087 00:07:46.087 ' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.087 --rc genhtml_branch_coverage=1 00:07:46.087 --rc genhtml_function_coverage=1 00:07:46.087 --rc genhtml_legend=1 00:07:46.087 --rc geninfo_all_blocks=1 00:07:46.087 --rc geninfo_unexecuted_blocks=1 00:07:46.087 00:07:46.087 ' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.087 --rc genhtml_branch_coverage=1 00:07:46.087 --rc genhtml_function_coverage=1 00:07:46.087 --rc genhtml_legend=1 00:07:46.087 --rc geninfo_all_blocks=1 00:07:46.087 --rc geninfo_unexecuted_blocks=1 00:07:46.087 00:07:46.087 ' 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3643081 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3643081 00:07:46.087 09:59:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3643081 ']' 00:07:46.087 09:59:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.088 09:59:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.088 09:59:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.088 09:59:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.088 09:59:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.355 [2024-11-06 09:59:49.622547] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:46.355 [2024-11-06 09:59:49.622619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643081 ] 00:07:46.355 [2024-11-06 09:59:49.705242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.355 [2024-11-06 09:59:49.748507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.355 [2024-11-06 09:59:49.748510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.927 09:59:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.927 09:59:50 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:46.927 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3643387 00:07:46.927 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:46.927 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:47.188 [ 00:07:47.188 "bdev_malloc_delete", 00:07:47.188 "bdev_malloc_create", 00:07:47.188 "bdev_null_resize", 00:07:47.188 "bdev_null_delete", 00:07:47.188 "bdev_null_create", 00:07:47.188 "bdev_nvme_cuse_unregister", 00:07:47.188 "bdev_nvme_cuse_register", 00:07:47.188 "bdev_opal_new_user", 00:07:47.188 "bdev_opal_set_lock_state", 00:07:47.188 "bdev_opal_delete", 00:07:47.188 "bdev_opal_get_info", 00:07:47.188 "bdev_opal_create", 00:07:47.188 "bdev_nvme_opal_revert", 00:07:47.188 "bdev_nvme_opal_init", 00:07:47.188 "bdev_nvme_send_cmd", 00:07:47.188 "bdev_nvme_set_keys", 00:07:47.188 "bdev_nvme_get_path_iostat", 00:07:47.188 "bdev_nvme_get_mdns_discovery_info", 00:07:47.188 "bdev_nvme_stop_mdns_discovery", 00:07:47.188 "bdev_nvme_start_mdns_discovery", 00:07:47.188 "bdev_nvme_set_multipath_policy", 00:07:47.188 "bdev_nvme_set_preferred_path", 00:07:47.188 "bdev_nvme_get_io_paths", 00:07:47.188 "bdev_nvme_remove_error_injection", 00:07:47.188 "bdev_nvme_add_error_injection", 00:07:47.188 "bdev_nvme_get_discovery_info", 00:07:47.188 "bdev_nvme_stop_discovery", 00:07:47.188 "bdev_nvme_start_discovery", 00:07:47.188 "bdev_nvme_get_controller_health_info", 00:07:47.188 "bdev_nvme_disable_controller", 00:07:47.188 "bdev_nvme_enable_controller", 00:07:47.188 "bdev_nvme_reset_controller", 00:07:47.188 "bdev_nvme_get_transport_statistics", 00:07:47.188 "bdev_nvme_apply_firmware", 00:07:47.188 "bdev_nvme_detach_controller", 00:07:47.188 "bdev_nvme_get_controllers", 00:07:47.188 "bdev_nvme_attach_controller", 00:07:47.188 "bdev_nvme_set_hotplug", 00:07:47.188 "bdev_nvme_set_options", 00:07:47.188 "bdev_passthru_delete", 00:07:47.188 "bdev_passthru_create", 00:07:47.188 "bdev_lvol_set_parent_bdev", 00:07:47.188 "bdev_lvol_set_parent", 00:07:47.188 "bdev_lvol_check_shallow_copy", 00:07:47.188 "bdev_lvol_start_shallow_copy", 00:07:47.188 "bdev_lvol_grow_lvstore", 00:07:47.189 "bdev_lvol_get_lvols", 00:07:47.189 "bdev_lvol_get_lvstores", 00:07:47.189 "bdev_lvol_delete", 00:07:47.189 "bdev_lvol_set_read_only", 00:07:47.189 "bdev_lvol_resize", 00:07:47.189 "bdev_lvol_decouple_parent", 00:07:47.189 "bdev_lvol_inflate", 00:07:47.189 "bdev_lvol_rename", 00:07:47.189 "bdev_lvol_clone_bdev", 00:07:47.189 "bdev_lvol_clone", 00:07:47.189 "bdev_lvol_snapshot", 00:07:47.189 "bdev_lvol_create", 00:07:47.189 "bdev_lvol_delete_lvstore", 00:07:47.189 "bdev_lvol_rename_lvstore", 
00:07:47.189 "bdev_lvol_create_lvstore", 00:07:47.189 "bdev_raid_set_options", 00:07:47.189 "bdev_raid_remove_base_bdev", 00:07:47.189 "bdev_raid_add_base_bdev", 00:07:47.189 "bdev_raid_delete", 00:07:47.189 "bdev_raid_create", 00:07:47.189 "bdev_raid_get_bdevs", 00:07:47.189 "bdev_error_inject_error", 00:07:47.189 "bdev_error_delete", 00:07:47.189 "bdev_error_create", 00:07:47.189 "bdev_split_delete", 00:07:47.189 "bdev_split_create", 00:07:47.189 "bdev_delay_delete", 00:07:47.189 "bdev_delay_create", 00:07:47.189 "bdev_delay_update_latency", 00:07:47.189 "bdev_zone_block_delete", 00:07:47.189 "bdev_zone_block_create", 00:07:47.189 "blobfs_create", 00:07:47.189 "blobfs_detect", 00:07:47.189 "blobfs_set_cache_size", 00:07:47.189 "bdev_aio_delete", 00:07:47.189 "bdev_aio_rescan", 00:07:47.189 "bdev_aio_create", 00:07:47.189 "bdev_ftl_set_property", 00:07:47.189 "bdev_ftl_get_properties", 00:07:47.189 "bdev_ftl_get_stats", 00:07:47.189 "bdev_ftl_unmap", 00:07:47.189 "bdev_ftl_unload", 00:07:47.189 "bdev_ftl_delete", 00:07:47.189 "bdev_ftl_load", 00:07:47.189 "bdev_ftl_create", 00:07:47.189 "bdev_virtio_attach_controller", 00:07:47.189 "bdev_virtio_scsi_get_devices", 00:07:47.189 "bdev_virtio_detach_controller", 00:07:47.189 "bdev_virtio_blk_set_hotplug", 00:07:47.189 "bdev_iscsi_delete", 00:07:47.189 "bdev_iscsi_create", 00:07:47.189 "bdev_iscsi_set_options", 00:07:47.189 "accel_error_inject_error", 00:07:47.189 "ioat_scan_accel_module", 00:07:47.189 "dsa_scan_accel_module", 00:07:47.189 "iaa_scan_accel_module", 00:07:47.189 "vfu_virtio_create_fs_endpoint", 00:07:47.189 "vfu_virtio_create_scsi_endpoint", 00:07:47.189 "vfu_virtio_scsi_remove_target", 00:07:47.189 "vfu_virtio_scsi_add_target", 00:07:47.189 "vfu_virtio_create_blk_endpoint", 00:07:47.189 "vfu_virtio_delete_endpoint", 00:07:47.189 "keyring_file_remove_key", 00:07:47.189 "keyring_file_add_key", 00:07:47.189 "keyring_linux_set_options", 00:07:47.189 "fsdev_aio_delete", 00:07:47.189 "fsdev_aio_create", 00:07:47.189 "iscsi_get_histogram", 00:07:47.189 "iscsi_enable_histogram", 00:07:47.189 "iscsi_set_options", 00:07:47.189 "iscsi_get_auth_groups", 00:07:47.189 "iscsi_auth_group_remove_secret", 00:07:47.189 "iscsi_auth_group_add_secret", 00:07:47.189 "iscsi_delete_auth_group", 00:07:47.189 "iscsi_create_auth_group", 00:07:47.189 "iscsi_set_discovery_auth", 00:07:47.189 "iscsi_get_options", 00:07:47.189 "iscsi_target_node_request_logout", 00:07:47.189 "iscsi_target_node_set_redirect", 00:07:47.189 "iscsi_target_node_set_auth", 00:07:47.189 "iscsi_target_node_add_lun", 00:07:47.189 "iscsi_get_stats", 00:07:47.189 "iscsi_get_connections", 00:07:47.189 "iscsi_portal_group_set_auth", 00:07:47.189 "iscsi_start_portal_group", 00:07:47.189 "iscsi_delete_portal_group", 00:07:47.189 "iscsi_create_portal_group", 00:07:47.189 "iscsi_get_portal_groups", 00:07:47.189 "iscsi_delete_target_node", 00:07:47.189 "iscsi_target_node_remove_pg_ig_maps", 00:07:47.189 "iscsi_target_node_add_pg_ig_maps", 00:07:47.189 "iscsi_create_target_node", 00:07:47.189 "iscsi_get_target_nodes", 00:07:47.189 "iscsi_delete_initiator_group", 00:07:47.189 "iscsi_initiator_group_remove_initiators", 00:07:47.189 "iscsi_initiator_group_add_initiators", 00:07:47.189 "iscsi_create_initiator_group", 00:07:47.189 "iscsi_get_initiator_groups", 00:07:47.189 "nvmf_set_crdt", 00:07:47.189 "nvmf_set_config", 00:07:47.189 "nvmf_set_max_subsystems", 00:07:47.189 "nvmf_stop_mdns_prr", 00:07:47.189 "nvmf_publish_mdns_prr", 00:07:47.189 "nvmf_subsystem_get_listeners", 00:07:47.189 
"nvmf_subsystem_get_qpairs", 00:07:47.189 "nvmf_subsystem_get_controllers", 00:07:47.189 "nvmf_get_stats", 00:07:47.189 "nvmf_get_transports", 00:07:47.189 "nvmf_create_transport", 00:07:47.189 "nvmf_get_targets", 00:07:47.189 "nvmf_delete_target", 00:07:47.189 "nvmf_create_target", 00:07:47.189 "nvmf_subsystem_allow_any_host", 00:07:47.189 "nvmf_subsystem_set_keys", 00:07:47.189 "nvmf_subsystem_remove_host", 00:07:47.189 "nvmf_subsystem_add_host", 00:07:47.189 "nvmf_ns_remove_host", 00:07:47.189 "nvmf_ns_add_host", 00:07:47.189 "nvmf_subsystem_remove_ns", 00:07:47.189 "nvmf_subsystem_set_ns_ana_group", 00:07:47.189 "nvmf_subsystem_add_ns", 00:07:47.189 "nvmf_subsystem_listener_set_ana_state", 00:07:47.189 "nvmf_discovery_get_referrals", 00:07:47.189 "nvmf_discovery_remove_referral", 00:07:47.189 "nvmf_discovery_add_referral", 00:07:47.189 "nvmf_subsystem_remove_listener", 00:07:47.189 "nvmf_subsystem_add_listener", 00:07:47.189 "nvmf_delete_subsystem", 00:07:47.189 "nvmf_create_subsystem", 00:07:47.189 "nvmf_get_subsystems", 00:07:47.189 "env_dpdk_get_mem_stats", 00:07:47.189 "nbd_get_disks", 00:07:47.189 "nbd_stop_disk", 00:07:47.189 "nbd_start_disk", 00:07:47.189 "ublk_recover_disk", 00:07:47.189 "ublk_get_disks", 00:07:47.189 "ublk_stop_disk", 00:07:47.189 "ublk_start_disk", 00:07:47.189 "ublk_destroy_target", 00:07:47.189 "ublk_create_target", 00:07:47.189 "virtio_blk_create_transport", 00:07:47.189 "virtio_blk_get_transports", 00:07:47.189 "vhost_controller_set_coalescing", 00:07:47.189 "vhost_get_controllers", 00:07:47.189 "vhost_delete_controller", 00:07:47.189 "vhost_create_blk_controller", 00:07:47.189 "vhost_scsi_controller_remove_target", 00:07:47.189 "vhost_scsi_controller_add_target", 00:07:47.189 "vhost_start_scsi_controller", 00:07:47.189 "vhost_create_scsi_controller", 00:07:47.189 "thread_set_cpumask", 00:07:47.189 "scheduler_set_options", 00:07:47.189 "framework_get_governor", 00:07:47.189 "framework_get_scheduler", 00:07:47.189 "framework_set_scheduler", 00:07:47.189 "framework_get_reactors", 00:07:47.189 "thread_get_io_channels", 00:07:47.189 "thread_get_pollers", 00:07:47.189 "thread_get_stats", 00:07:47.189 "framework_monitor_context_switch", 00:07:47.189 "spdk_kill_instance", 00:07:47.189 "log_enable_timestamps", 00:07:47.189 "log_get_flags", 00:07:47.189 "log_clear_flag", 00:07:47.189 "log_set_flag", 00:07:47.189 "log_get_level", 00:07:47.189 "log_set_level", 00:07:47.189 "log_get_print_level", 00:07:47.189 "log_set_print_level", 00:07:47.189 "framework_enable_cpumask_locks", 00:07:47.189 "framework_disable_cpumask_locks", 00:07:47.189 "framework_wait_init", 00:07:47.189 "framework_start_init", 00:07:47.189 "scsi_get_devices", 00:07:47.189 "bdev_get_histogram", 00:07:47.189 "bdev_enable_histogram", 00:07:47.189 "bdev_set_qos_limit", 00:07:47.189 "bdev_set_qd_sampling_period", 00:07:47.189 "bdev_get_bdevs", 00:07:47.189 "bdev_reset_iostat", 00:07:47.189 "bdev_get_iostat", 00:07:47.189 "bdev_examine", 00:07:47.189 "bdev_wait_for_examine", 00:07:47.189 "bdev_set_options", 00:07:47.189 "accel_get_stats", 00:07:47.189 "accel_set_options", 00:07:47.189 "accel_set_driver", 00:07:47.189 "accel_crypto_key_destroy", 00:07:47.189 "accel_crypto_keys_get", 00:07:47.189 "accel_crypto_key_create", 00:07:47.189 "accel_assign_opc", 00:07:47.189 "accel_get_module_info", 00:07:47.189 "accel_get_opc_assignments", 00:07:47.189 "vmd_rescan", 00:07:47.189 "vmd_remove_device", 00:07:47.189 "vmd_enable", 00:07:47.189 "sock_get_default_impl", 00:07:47.189 "sock_set_default_impl", 
00:07:47.189 "sock_impl_set_options", 00:07:47.189 "sock_impl_get_options", 00:07:47.189 "iobuf_get_stats", 00:07:47.189 "iobuf_set_options", 00:07:47.189 "keyring_get_keys", 00:07:47.189 "vfu_tgt_set_base_path", 00:07:47.189 "framework_get_pci_devices", 00:07:47.189 "framework_get_config", 00:07:47.189 "framework_get_subsystems", 00:07:47.189 "fsdev_set_opts", 00:07:47.189 "fsdev_get_opts", 00:07:47.189 "trace_get_info", 00:07:47.189 "trace_get_tpoint_group_mask", 00:07:47.189 "trace_disable_tpoint_group", 00:07:47.189 "trace_enable_tpoint_group", 00:07:47.189 "trace_clear_tpoint_mask", 00:07:47.189 "trace_set_tpoint_mask", 00:07:47.189 "notify_get_notifications", 00:07:47.189 "notify_get_types", 00:07:47.189 "spdk_get_version", 00:07:47.189 "rpc_get_methods" 00:07:47.189 ] 00:07:47.189 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.189 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:47.189 09:59:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3643081 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3643081 ']' 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3643081 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:47.189 09:59:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3643081 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3643081' 00:07:47.190 killing process with pid 3643081 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3643081 00:07:47.190 09:59:50 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3643081 00:07:47.449 00:07:47.449 real 0m1.526s 00:07:47.449 user 0m2.746s 00:07:47.449 sys 0m0.449s 00:07:47.449 09:59:50 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.449 09:59:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.449 ************************************ 00:07:47.449 END TEST spdkcli_tcp 00:07:47.449 ************************************ 00:07:47.449 09:59:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:47.449 09:59:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:47.449 09:59:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.450 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.450 ************************************ 00:07:47.450 START TEST dpdk_mem_utility 00:07:47.450 ************************************ 00:07:47.450 09:59:50 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:47.710 * Looking for test storage... 
00:07:47.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.710 09:59:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.710 --rc genhtml_branch_coverage=1 00:07:47.710 --rc genhtml_function_coverage=1 00:07:47.710 --rc genhtml_legend=1 00:07:47.710 --rc geninfo_all_blocks=1 00:07:47.710 --rc geninfo_unexecuted_blocks=1 00:07:47.710 00:07:47.710 ' 00:07:47.710 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.710 --rc 
genhtml_branch_coverage=1 00:07:47.710 --rc genhtml_function_coverage=1 00:07:47.710 --rc genhtml_legend=1 00:07:47.710 --rc geninfo_all_blocks=1 00:07:47.711 --rc geninfo_unexecuted_blocks=1 00:07:47.711 00:07:47.711 ' 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.711 --rc genhtml_branch_coverage=1 00:07:47.711 --rc genhtml_function_coverage=1 00:07:47.711 --rc genhtml_legend=1 00:07:47.711 --rc geninfo_all_blocks=1 00:07:47.711 --rc geninfo_unexecuted_blocks=1 00:07:47.711 00:07:47.711 ' 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.711 --rc genhtml_branch_coverage=1 00:07:47.711 --rc genhtml_function_coverage=1 00:07:47.711 --rc genhtml_legend=1 00:07:47.711 --rc geninfo_all_blocks=1 00:07:47.711 --rc geninfo_unexecuted_blocks=1 00:07:47.711 00:07:47.711 ' 00:07:47.711 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:47.711 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3643494 00:07:47.711 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3643494 00:07:47.711 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3643494 ']' 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.711 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:47.711 [2024-11-06 09:59:51.202539] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
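The dpdk_mem_utility test starting here asks the freshly launched spdk_tgt to dump its DPDK memory statistics and then renders the dump with scripts/dpdk_mem_info.py; the heap, mempool, and memzone report printed below comes from that helper. A rough equivalent of the sequence, using the script and dump paths visible in this run (the -m 0 invocation is what produces the per-heap element listing further down):

    # ask the running target to write its DPDK memory statistics
    ./scripts/rpc.py env_dpdk_get_mem_stats     # this run reports {"filename": "/tmp/spdk_mem_dump.txt"}

    # summarize the dump, then print the detailed per-heap view the test also checks
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0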
00:07:47.711 [2024-11-06 09:59:51.202595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643494 ] 00:07:47.970 [2024-11-06 09:59:51.281228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.970 [2024-11-06 09:59:51.317177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.540 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.540 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:48.540 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:48.540 09:59:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:48.540 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.540 09:59:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:48.540 { 00:07:48.540 "filename": "/tmp/spdk_mem_dump.txt" 00:07:48.540 } 00:07:48.540 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.540 09:59:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:48.800 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:48.800 1 heaps totaling size 810.000000 MiB 00:07:48.800 size: 810.000000 MiB heap id: 0 00:07:48.800 end heaps---------- 00:07:48.800 9 mempools totaling size 595.772034 MiB 00:07:48.800 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:48.800 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:48.800 size: 92.545471 MiB name: bdev_io_3643494 00:07:48.800 size: 50.003479 MiB name: msgpool_3643494 00:07:48.800 size: 36.509338 MiB name: fsdev_io_3643494 00:07:48.800 size: 21.763794 MiB name: PDU_Pool 00:07:48.800 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:48.800 size: 4.133484 MiB name: evtpool_3643494 00:07:48.800 size: 0.026123 MiB name: Session_Pool 00:07:48.800 end mempools------- 00:07:48.800 6 memzones totaling size 4.142822 MiB 00:07:48.800 size: 1.000366 MiB name: RG_ring_0_3643494 00:07:48.800 size: 1.000366 MiB name: RG_ring_1_3643494 00:07:48.800 size: 1.000366 MiB name: RG_ring_4_3643494 00:07:48.800 size: 1.000366 MiB name: RG_ring_5_3643494 00:07:48.800 size: 0.125366 MiB name: RG_ring_2_3643494 00:07:48.800 size: 0.015991 MiB name: RG_ring_3_3643494 00:07:48.800 end memzones------- 00:07:48.800 09:59:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:48.800 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:48.800 list of free elements. 
size: 10.862488 MiB 00:07:48.800 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:48.800 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:48.801 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:48.801 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:48.801 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:48.801 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:48.801 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:48.801 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:48.801 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:48.801 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:48.801 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:48.801 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:48.801 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:48.801 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:48.801 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:48.801 list of standard malloc elements. size: 199.218628 MiB 00:07:48.801 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:48.801 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:48.801 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:48.801 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:48.801 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:48.801 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:48.801 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:48.801 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:48.801 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:48.801 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:07:48.801 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:48.801 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:48.801 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:48.801 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:48.801 list of memzone associated elements. size: 599.918884 MiB 00:07:48.801 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:48.801 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:48.801 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:48.801 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:48.801 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:48.801 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3643494_0 00:07:48.801 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:48.801 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3643494_0 00:07:48.801 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:48.801 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3643494_0 00:07:48.801 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:48.801 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:48.801 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:48.801 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:48.801 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:48.801 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3643494_0 00:07:48.801 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:48.801 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3643494 00:07:48.801 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:48.801 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3643494 00:07:48.801 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:48.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:48.801 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:48.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:48.801 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:48.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:48.801 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:48.801 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:48.801 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:48.801 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3643494 00:07:48.801 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:48.801 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3643494 00:07:48.801 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:48.801 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3643494 00:07:48.801 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:07:48.801 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3643494 00:07:48.801 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:48.801 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3643494 00:07:48.801 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:48.801 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3643494 00:07:48.801 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:48.801 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:48.801 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:48.801 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:48.801 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:48.801 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:48.801 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:48.801 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3643494 00:07:48.801 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:48.801 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3643494 00:07:48.801 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:48.801 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:48.801 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:48.801 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:48.801 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:48.801 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3643494 00:07:48.801 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:48.801 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:48.801 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:48.801 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3643494 00:07:48.801 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:48.801 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3643494 00:07:48.801 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:48.801 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3643494 00:07:48.801 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:48.801 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:48.801 09:59:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:48.801 09:59:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3643494 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3643494 ']' 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3643494 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3643494 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3643494' 00:07:48.801 killing process with pid 3643494 00:07:48.801 09:59:52 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3643494 00:07:48.801 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3643494 00:07:49.062 00:07:49.062 real 0m1.406s 00:07:49.062 user 0m1.464s 00:07:49.062 sys 0m0.416s 00:07:49.062 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.062 09:59:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:49.062 ************************************ 00:07:49.062 END TEST dpdk_mem_utility 00:07:49.062 ************************************ 00:07:49.062 09:59:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:49.062 09:59:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.062 09:59:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.062 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.062 ************************************ 00:07:49.062 START TEST event 00:07:49.062 ************************************ 00:07:49.062 09:59:52 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:49.062 * Looking for test storage... 00:07:49.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:49.062 09:59:52 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:49.062 09:59:52 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:49.062 09:59:52 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:49.321 09:59:52 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:49.321 09:59:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.321 09:59:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.321 09:59:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.321 09:59:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.322 09:59:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.322 09:59:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.322 09:59:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.322 09:59:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.322 09:59:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.322 09:59:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.322 09:59:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.322 09:59:52 event -- scripts/common.sh@344 -- # case "$op" in 00:07:49.322 09:59:52 event -- scripts/common.sh@345 -- # : 1 00:07:49.322 09:59:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.322 09:59:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.322 09:59:52 event -- scripts/common.sh@365 -- # decimal 1 00:07:49.322 09:59:52 event -- scripts/common.sh@353 -- # local d=1 00:07:49.322 09:59:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.322 09:59:52 event -- scripts/common.sh@355 -- # echo 1 00:07:49.322 09:59:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.322 09:59:52 event -- scripts/common.sh@366 -- # decimal 2 00:07:49.322 09:59:52 event -- scripts/common.sh@353 -- # local d=2 00:07:49.322 09:59:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.322 09:59:52 event -- scripts/common.sh@355 -- # echo 2 00:07:49.322 09:59:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.322 09:59:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.322 09:59:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.322 09:59:52 event -- scripts/common.sh@368 -- # return 0 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:49.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.322 --rc genhtml_branch_coverage=1 00:07:49.322 --rc genhtml_function_coverage=1 00:07:49.322 --rc genhtml_legend=1 00:07:49.322 --rc geninfo_all_blocks=1 00:07:49.322 --rc geninfo_unexecuted_blocks=1 00:07:49.322 00:07:49.322 ' 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:49.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.322 --rc genhtml_branch_coverage=1 00:07:49.322 --rc genhtml_function_coverage=1 00:07:49.322 --rc genhtml_legend=1 00:07:49.322 --rc geninfo_all_blocks=1 00:07:49.322 --rc geninfo_unexecuted_blocks=1 00:07:49.322 00:07:49.322 ' 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:49.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.322 --rc genhtml_branch_coverage=1 00:07:49.322 --rc genhtml_function_coverage=1 00:07:49.322 --rc genhtml_legend=1 00:07:49.322 --rc geninfo_all_blocks=1 00:07:49.322 --rc geninfo_unexecuted_blocks=1 00:07:49.322 00:07:49.322 ' 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:49.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.322 --rc genhtml_branch_coverage=1 00:07:49.322 --rc genhtml_function_coverage=1 00:07:49.322 --rc genhtml_legend=1 00:07:49.322 --rc geninfo_all_blocks=1 00:07:49.322 --rc geninfo_unexecuted_blocks=1 00:07:49.322 00:07:49.322 ' 00:07:49.322 09:59:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:49.322 09:59:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:49.322 09:59:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:49.322 09:59:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.322 09:59:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.322 ************************************ 00:07:49.322 START TEST event_perf 00:07:49.322 ************************************ 00:07:49.322 09:59:52 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:07:49.322 Running I/O for 1 seconds...[2024-11-06 09:59:52.676848] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:49.322 [2024-11-06 09:59:52.676960] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643893 ] 00:07:49.322 [2024-11-06 09:59:52.759847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.322 [2024-11-06 09:59:52.799445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.322 [2024-11-06 09:59:52.799558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.322 [2024-11-06 09:59:52.799715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.322 Running I/O for 1 seconds...[2024-11-06 09:59:52.799715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.704 00:07:50.704 lcore 0: 181166 00:07:50.704 lcore 1: 181166 00:07:50.704 lcore 2: 181164 00:07:50.704 lcore 3: 181167 00:07:50.704 done. 00:07:50.704 00:07:50.704 real 0m1.178s 00:07:50.704 user 0m4.099s 00:07:50.704 sys 0m0.076s 00:07:50.704 09:59:53 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.704 09:59:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:50.704 ************************************ 00:07:50.704 END TEST event_perf 00:07:50.704 ************************************ 00:07:50.704 09:59:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:50.704 09:59:53 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:50.704 09:59:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.704 09:59:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.704 ************************************ 00:07:50.704 START TEST event_reactor 00:07:50.704 ************************************ 00:07:50.704 09:59:53 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:50.704 [2024-11-06 09:59:53.932779] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:50.704 [2024-11-06 09:59:53.932927] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644245 ] 00:07:50.704 [2024-11-06 09:59:54.014939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.704 [2024-11-06 09:59:54.049045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.646 test_start 00:07:51.646 oneshot 00:07:51.646 tick 100 00:07:51.646 tick 100 00:07:51.646 tick 250 00:07:51.646 tick 100 00:07:51.646 tick 100 00:07:51.646 tick 250 00:07:51.646 tick 100 00:07:51.646 tick 500 00:07:51.646 tick 100 00:07:51.646 tick 100 00:07:51.646 tick 250 00:07:51.646 tick 100 00:07:51.646 tick 100 00:07:51.646 test_end 00:07:51.646 00:07:51.646 real 0m1.169s 00:07:51.646 user 0m1.096s 00:07:51.646 sys 0m0.069s 00:07:51.646 09:59:55 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.646 09:59:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:51.646 ************************************ 00:07:51.646 END TEST event_reactor 00:07:51.646 ************************************ 00:07:51.646 09:59:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:51.646 09:59:55 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:51.646 09:59:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.646 09:59:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:51.906 ************************************ 00:07:51.906 START TEST event_reactor_perf 00:07:51.906 ************************************ 00:07:51.906 09:59:55 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:51.906 [2024-11-06 09:59:55.179859] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:51.906 [2024-11-06 09:59:55.179994] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644416 ] 00:07:51.906 [2024-11-06 09:59:55.271165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.906 [2024-11-06 09:59:55.310911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.846 test_start 00:07:52.846 test_end 00:07:52.846 Performance: 368006 events per second 00:07:52.846 00:07:52.846 real 0m1.185s 00:07:52.846 user 0m1.100s 00:07:52.846 sys 0m0.081s 00:07:52.846 09:59:56 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.846 09:59:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 ************************************ 00:07:52.846 END TEST event_reactor_perf 00:07:52.846 ************************************ 00:07:53.107 09:59:56 event -- event/event.sh@49 -- # uname -s 00:07:53.107 09:59:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:53.107 09:59:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:53.107 09:59:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.107 09:59:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.107 09:59:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.107 ************************************ 00:07:53.107 START TEST event_scheduler 00:07:53.107 ************************************ 00:07:53.107 09:59:56 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:53.107 * Looking for test storage... 
00:07:53.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:53.107 09:59:56 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.107 09:59:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.107 09:59:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.107 09:59:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.107 09:59:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.368 09:59:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.368 --rc genhtml_branch_coverage=1 00:07:53.368 --rc genhtml_function_coverage=1 00:07:53.368 --rc genhtml_legend=1 00:07:53.368 --rc geninfo_all_blocks=1 00:07:53.368 --rc geninfo_unexecuted_blocks=1 00:07:53.368 00:07:53.368 ' 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.368 --rc genhtml_branch_coverage=1 00:07:53.368 --rc genhtml_function_coverage=1 00:07:53.368 --rc genhtml_legend=1 00:07:53.368 --rc geninfo_all_blocks=1 00:07:53.368 --rc geninfo_unexecuted_blocks=1 00:07:53.368 00:07:53.368 ' 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.368 --rc genhtml_branch_coverage=1 00:07:53.368 --rc genhtml_function_coverage=1 00:07:53.368 --rc genhtml_legend=1 00:07:53.368 --rc geninfo_all_blocks=1 00:07:53.368 --rc geninfo_unexecuted_blocks=1 00:07:53.368 00:07:53.368 ' 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.368 --rc genhtml_branch_coverage=1 00:07:53.368 --rc genhtml_function_coverage=1 00:07:53.368 --rc genhtml_legend=1 00:07:53.368 --rc geninfo_all_blocks=1 00:07:53.368 --rc geninfo_unexecuted_blocks=1 00:07:53.368 00:07:53.368 ' 00:07:53.368 09:59:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:53.368 09:59:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3644685 00:07:53.368 09:59:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.368 09:59:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:53.368 09:59:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3644685 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3644685 ']' 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.368 09:59:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:53.369 [2024-11-06 09:59:56.679999] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:53.369 [2024-11-06 09:59:56.680090] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644685 ] 00:07:53.369 [2024-11-06 09:59:56.753271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.369 [2024-11-06 09:59:56.792641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.369 [2024-11-06 09:59:56.792805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.369 [2024-11-06 09:59:56.792963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.369 [2024-11-06 09:59:56.792964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:54.308 09:59:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 [2024-11-06 09:59:57.491162] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:54.308 [2024-11-06 09:59:57.491178] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:54.308 [2024-11-06 09:59:57.491186] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:54.308 [2024-11-06 09:59:57.491190] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:54.308 [2024-11-06 09:59:57.491194] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 [2024-11-06 09:59:57.550774] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 ************************************ 00:07:54.308 START TEST scheduler_create_thread 00:07:54.308 ************************************ 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 2 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 3 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 4 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 5 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 6 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- 
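Because the scheduler app was launched with --wait-for-rpc, everything up to this point was configured over JSON-RPC before framework_start_init, and the scheduler_create_thread test that follows drives the scheduler_plugin RPCs the same way (rpc_cmd in the trace is the harness' RPC helper). A minimal sketch of that flow with scripts/rpc.py, reusing the calls and flags from this run; the thread ids are simply the ones this run assigned, and the plugin module is assumed to be importable as the test arranges:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init

    # thread-management RPCs registered by the test's scheduler_plugin
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12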
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 7 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 8 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.308 9 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.308 09:59:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.880 10 00:07:54.880 09:59:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.880 09:59:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:54.880 09:59:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.880 09:59:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.264 09:59:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.264 09:59:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:56.264 09:59:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:56.264 09:59:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.264 09:59:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.834 10:00:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.834 10:00:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:56.834 10:00:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.834 10:00:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.775 10:00:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:57.775 10:00:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:57.775 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.348 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.348 00:07:58.348 real 0m4.226s 00:07:58.348 user 0m0.022s 00:07:58.348 sys 0m0.009s 00:07:58.348 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.348 10:00:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.348 ************************************ 00:07:58.348 END TEST scheduler_create_thread 00:07:58.348 ************************************ 00:07:58.609 10:00:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:58.609 10:00:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3644685 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3644685 ']' 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3644685 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3644685 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3644685' 00:07:58.609 killing process with pid 3644685 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3644685 00:07:58.609 10:00:01 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3644685 00:07:58.609 [2024-11-06 10:00:02.096084] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:58.869 00:07:58.869 real 0m5.830s 00:07:58.869 user 0m12.989s 00:07:58.869 sys 0m0.396s 00:07:58.869 10:00:02 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.869 10:00:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.869 ************************************ 00:07:58.869 END TEST event_scheduler 00:07:58.869 ************************************ 00:07:58.869 10:00:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:58.869 10:00:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:58.869 10:00:02 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.869 10:00:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.869 10:00:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.869 ************************************ 00:07:58.869 START TEST app_repeat 00:07:58.869 ************************************ 00:07:58.869 10:00:02 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3646166 00:07:58.869 10:00:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.870 10:00:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:58.870 10:00:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3646166' 00:07:58.870 Process app_repeat pid: 3646166 00:07:58.870 10:00:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:58.870 10:00:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:58.870 spdk_app_start Round 0 00:07:58.870 10:00:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3646166 /var/tmp/spdk-nbd.sock 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3646166 ']' 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.870 10:00:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:59.129 [2024-11-06 10:00:02.376665] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
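Note: app_repeat is launched here with its own RPC socket (-r /var/tmp/spdk-nbd.sock), a two-core mask (-m 0x3) and a 4-second repeat interval (-t 4), and waitforlisten then blocks until that socket answers. Roughly what that wait amounts to (an illustration only, not the actual waitforlisten implementation, which also tracks the pid):

  sock=/var/tmp/spdk-nbd.sock
  test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
  repeat_pid=$!
  # poll the socket until the app responds to a trivial RPC
  for i in $(seq 1 100); do
      scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done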
00:07:59.129 [2024-11-06 10:00:02.376726] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646166 ] 00:07:59.129 [2024-11-06 10:00:02.458432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.129 [2024-11-06 10:00:02.499011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.129 [2024-11-06 10:00:02.499014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.130 10:00:02 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.130 10:00:02 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:59.130 10:00:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:59.390 Malloc0 00:07:59.391 10:00:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:59.651 Malloc1 00:07:59.651 10:00:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.651 10:00:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:59.651 /dev/nbd0 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:59.913 1+0 records in 00:07:59.913 1+0 records out 00:07:59.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211617 s, 19.4 MB/s 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:59.913 /dev/nbd1 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:59.913 10:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:59.913 1+0 records in 00:07:59.913 1+0 records out 00:07:59.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283469 s, 14.4 MB/s 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:59.913 10:00:03 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:00.174 10:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:00.174 10:00:03 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.174 
10:00:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:00.174 { 00:08:00.174 "nbd_device": "/dev/nbd0", 00:08:00.174 "bdev_name": "Malloc0" 00:08:00.174 }, 00:08:00.174 { 00:08:00.174 "nbd_device": "/dev/nbd1", 00:08:00.174 "bdev_name": "Malloc1" 00:08:00.174 } 00:08:00.174 ]' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:00.174 { 00:08:00.174 "nbd_device": "/dev/nbd0", 00:08:00.174 "bdev_name": "Malloc0" 00:08:00.174 }, 00:08:00.174 { 00:08:00.174 "nbd_device": "/dev/nbd1", 00:08:00.174 "bdev_name": "Malloc1" 00:08:00.174 } 00:08:00.174 ]' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:00.174 /dev/nbd1' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:00.174 /dev/nbd1' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:00.174 256+0 records in 00:08:00.174 256+0 records out 00:08:00.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012645 s, 82.9 MB/s 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.174 10:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.435 256+0 records in 00:08:00.435 256+0 records out 00:08:00.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167284 s, 62.7 MB/s 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:00.435 256+0 records in 00:08:00.435 256+0 records out 00:08:00.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184381 s, 56.9 MB/s 00:08:00.435 10:00:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.435 10:00:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.695 10:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.980 10:00:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.980 10:00:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:01.241 10:00:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:01.241 [2024-11-06 10:00:04.629443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:01.241 [2024-11-06 10:00:04.664593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.241 [2024-11-06 10:00:04.664595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.241 [2024-11-06 10:00:04.696329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:01.241 [2024-11-06 10:00:04.696365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:04.538 10:00:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:04.538 10:00:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:04.538 spdk_app_start Round 1 00:08:04.538 10:00:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3646166 /var/tmp/spdk-nbd.sock 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3646166 ']' 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:04.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
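Note: each app_repeat round in this trace performs the same round-trip: two 64 MB malloc bdevs with 4 KiB blocks are created, exported over the kernel nbd driver as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each block device and compared back, and the devices are detached again. Condensed into the underlying RPC and shell calls (a sketch against the same socket; the temp-file path is shortened here):

  sock=/var/tmp/spdk-nbd.sock
  rpc="scripts/rpc.py -s $sock"
  $rpc bdev_malloc_create 64 4096            # -> Malloc0, 64 MB, 4 KiB blocks
  $rpc bdev_malloc_create 64 4096            # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256     # 1 MiB of test data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $nbd     # read back and verify
  done
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1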
00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.538 10:00:07 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:04.538 10:00:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:04.538 Malloc0 00:08:04.538 10:00:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:04.538 Malloc1 00:08:04.538 10:00:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.538 10:00:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:04.799 /dev/nbd0 00:08:04.800 10:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.800 10:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:04.800 1+0 records in 00:08:04.800 1+0 records out 00:08:04.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276081 s, 14.8 MB/s 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.800 10:00:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:04.800 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.800 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.800 10:00:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:05.060 /dev/nbd1 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:05.060 1+0 records in 00:08:05.060 1+0 records out 00:08:05.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179458 s, 22.8 MB/s 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:05.060 10:00:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.060 10:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:05.322 { 00:08:05.322 "nbd_device": "/dev/nbd0", 00:08:05.322 "bdev_name": "Malloc0" 00:08:05.322 }, 00:08:05.322 { 00:08:05.322 "nbd_device": "/dev/nbd1", 00:08:05.322 "bdev_name": "Malloc1" 00:08:05.322 } 00:08:05.322 ]' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:05.322 { 00:08:05.322 "nbd_device": "/dev/nbd0", 00:08:05.322 "bdev_name": "Malloc0" 00:08:05.322 }, 00:08:05.322 { 00:08:05.322 "nbd_device": "/dev/nbd1", 00:08:05.322 "bdev_name": "Malloc1" 00:08:05.322 } 00:08:05.322 ]' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:05.322 /dev/nbd1' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:05.322 /dev/nbd1' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:05.322 256+0 records in 00:08:05.322 256+0 records out 00:08:05.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117603 s, 89.2 MB/s 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:05.322 256+0 records in 00:08:05.322 256+0 records out 00:08:05.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179194 s, 58.5 MB/s 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:05.322 256+0 records in 00:08:05.322 256+0 records out 00:08:05.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181171 s, 57.9 MB/s 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.322 10:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.582 10:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.843 10:00:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.103 10:00:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.103 10:00:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:06.103 10:00:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:06.365 [2024-11-06 10:00:09.707207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.365 [2024-11-06 10:00:09.743313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.365 [2024-11-06 10:00:09.743316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.365 [2024-11-06 10:00:09.775738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:06.365 [2024-11-06 10:00:09.775774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:09.659 10:00:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:09.659 10:00:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:09.659 spdk_app_start Round 2 00:08:09.659 10:00:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3646166 /var/tmp/spdk-nbd.sock 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3646166 ']' 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
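Note: the device counts checked in the trace come from nbd_get_disks, which returns a JSON array of the currently exported devices; nbd_common.sh pulls the names out with jq and counts them. A standalone equivalent, assuming the same RPC socket:

  sock=/var/tmp/spdk-nbd.sock
  json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
  # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on 0 matches
  echo "exported nbd devices: $count"                 # 2 while attached, 0 after nbd_stop_disk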
00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.659 10:00:12 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:09.659 10:00:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:09.659 Malloc0 00:08:09.659 10:00:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:09.659 Malloc1 00:08:09.659 10:00:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:09.659 10:00:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:09.920 /dev/nbd0 00:08:09.920 10:00:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:09.920 10:00:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:09.920 1+0 records in 00:08:09.920 1+0 records out 00:08:09.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234559 s, 17.5 MB/s 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:09.920 10:00:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:09.920 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.920 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:09.920 10:00:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:10.181 /dev/nbd1 00:08:10.181 10:00:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:10.181 10:00:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:10.181 10:00:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:10.182 1+0 records in 00:08:10.182 1+0 records out 00:08:10.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239544 s, 17.1 MB/s 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:10.182 10:00:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:10.182 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.182 10:00:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.182 10:00:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.182 10:00:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.182 10:00:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:10.442 { 00:08:10.442 "nbd_device": "/dev/nbd0", 00:08:10.442 "bdev_name": "Malloc0" 00:08:10.442 }, 00:08:10.442 { 00:08:10.442 "nbd_device": "/dev/nbd1", 00:08:10.442 "bdev_name": "Malloc1" 00:08:10.442 } 00:08:10.442 ]' 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.442 { 00:08:10.442 "nbd_device": "/dev/nbd0", 00:08:10.442 "bdev_name": "Malloc0" 00:08:10.442 }, 00:08:10.442 { 00:08:10.442 "nbd_device": "/dev/nbd1", 00:08:10.442 "bdev_name": "Malloc1" 00:08:10.442 } 00:08:10.442 ]' 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:10.442 /dev/nbd1' 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:10.442 /dev/nbd1' 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:10.442 10:00:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:10.443 256+0 records in 00:08:10.443 256+0 records out 00:08:10.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011914 s, 88.0 MB/s 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:10.443 256+0 records in 00:08:10.443 256+0 records out 00:08:10.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167678 s, 62.5 MB/s 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.443 256+0 records in 00:08:10.443 256+0 records out 00:08:10.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347532 s, 30.2 MB/s 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.443 10:00:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.704 10:00:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.964 10:00:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.964 10:00:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:11.225 10:00:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:11.486 [2024-11-06 10:00:14.757420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.486 [2024-11-06 10:00:14.792561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.486 [2024-11-06 10:00:14.792563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.486 [2024-11-06 10:00:14.824238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:11.486 [2024-11-06 10:00:14.824281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:14.875 10:00:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3646166 /var/tmp/spdk-nbd.sock 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3646166 ']' 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:14.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
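Note: waitfornbd and waitfornbd_exit in the trace both poll /proc/partitions, retrying up to 20 times, to see the nbd device appear or disappear; waitfornbd additionally reads one 4 KiB block through the device to confirm it really answers. A rough sketch of that behaviour (not the autotest_common.sh source):

  waitfornbd() {
      local name=$1
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions && break   # device registered with the kernel?
          sleep 0.1
      done
      dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ]              # one block actually came back
  }
  waitfornbd_exit() {
      local name=$1
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions || break   # wait until it is gone
          sleep 0.1
      done
  }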
00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:14.875 10:00:17 event.app_repeat -- event/event.sh@39 -- # killprocess 3646166 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3646166 ']' 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3646166 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3646166 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3646166' 00:08:14.875 killing process with pid 3646166 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3646166 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3646166 00:08:14.875 spdk_app_start is called in Round 0. 00:08:14.875 Shutdown signal received, stop current app iteration 00:08:14.875 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:08:14.875 spdk_app_start is called in Round 1. 00:08:14.875 Shutdown signal received, stop current app iteration 00:08:14.875 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:08:14.875 spdk_app_start is called in Round 2. 00:08:14.875 Shutdown signal received, stop current app iteration 00:08:14.875 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:08:14.875 spdk_app_start is called in Round 3. 
00:08:14.875 Shutdown signal received, stop current app iteration 00:08:14.875 10:00:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:14.875 10:00:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:14.875 00:08:14.875 real 0m15.638s 00:08:14.875 user 0m34.029s 00:08:14.875 sys 0m2.309s 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.875 10:00:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:14.875 ************************************ 00:08:14.875 END TEST app_repeat 00:08:14.875 ************************************ 00:08:14.875 10:00:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:14.875 10:00:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:14.875 10:00:18 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.875 10:00:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.875 10:00:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:14.875 ************************************ 00:08:14.875 START TEST cpu_locks 00:08:14.875 ************************************ 00:08:14.875 10:00:18 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:14.875 * Looking for test storage... 00:08:14.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:14.875 10:00:18 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.875 10:00:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.875 10:00:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.875 10:00:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.875 10:00:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.876 10:00:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.876 --rc genhtml_branch_coverage=1 00:08:14.876 --rc genhtml_function_coverage=1 00:08:14.876 --rc genhtml_legend=1 00:08:14.876 --rc geninfo_all_blocks=1 00:08:14.876 --rc geninfo_unexecuted_blocks=1 00:08:14.876 00:08:14.876 ' 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.876 --rc genhtml_branch_coverage=1 00:08:14.876 --rc genhtml_function_coverage=1 00:08:14.876 --rc genhtml_legend=1 00:08:14.876 --rc geninfo_all_blocks=1 00:08:14.876 --rc geninfo_unexecuted_blocks=1 00:08:14.876 00:08:14.876 ' 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.876 --rc genhtml_branch_coverage=1 00:08:14.876 --rc genhtml_function_coverage=1 00:08:14.876 --rc genhtml_legend=1 00:08:14.876 --rc geninfo_all_blocks=1 00:08:14.876 --rc geninfo_unexecuted_blocks=1 00:08:14.876 00:08:14.876 ' 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.876 --rc genhtml_branch_coverage=1 00:08:14.876 --rc genhtml_function_coverage=1 00:08:14.876 --rc genhtml_legend=1 00:08:14.876 --rc geninfo_all_blocks=1 00:08:14.876 --rc geninfo_unexecuted_blocks=1 00:08:14.876 00:08:14.876 ' 00:08:14.876 10:00:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:14.876 10:00:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:14.876 10:00:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:14.876 10:00:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.876 10:00:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.876 ************************************ 
00:08:14.876 START TEST default_locks 00:08:14.876 ************************************ 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3649877 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3649877 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3649877 ']' 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.876 10:00:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:14.876 [2024-11-06 10:00:18.335967] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:14.876 [2024-11-06 10:00:18.336028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649877 ] 00:08:15.137 [2024-11-06 10:00:18.418214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.137 [2024-11-06 10:00:18.459907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.712 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.712 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:15.712 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3649877 00:08:15.712 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3649877 00:08:15.712 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.974 lslocks: write error 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3649877 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3649877 ']' 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3649877 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3649877 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 3649877' 00:08:15.974 killing process with pid 3649877 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3649877 00:08:15.974 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3649877 00:08:16.235 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3649877 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3649877 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3649877 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3649877 ']' 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3649877) - No such process 00:08:16.236 ERROR: process (pid: 3649877) is no longer running 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:16.236 00:08:16.236 real 0m1.257s 00:08:16.236 user 0m1.359s 00:08:16.236 sys 0m0.401s 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.236 10:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.236 ************************************ 00:08:16.236 END TEST default_locks 00:08:16.236 ************************************ 00:08:16.236 10:00:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:16.236 10:00:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.236 10:00:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.236 10:00:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.236 ************************************ 00:08:16.236 START TEST default_locks_via_rpc 00:08:16.236 ************************************ 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3650242 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3650242 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3650242 ']' 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
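The default_locks run above reduces to a simple pattern: start one spdk_tgt pinned to a single core, confirm via lslocks that the process holds a lock whose name contains spdk_cpu_lock, kill it, and verify that a later wait on the dead pid fails. A minimal stand-alone sketch of that pattern (the repo-relative binary path and the sleep-based wait are illustrative assumptions; the suite itself uses its waitforlisten/killprocess helpers):

    # start a single-core target in the background (binary path assumed)
    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 2                                  # crude stand-in for the suite's waitforlisten
    # the running target should hold a POSIX lock named like spdk_cpu_lock_*
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null                  # reap it; the pid no longer exists afterwards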
00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.236 10:00:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.236 [2024-11-06 10:00:19.668240] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:16.236 [2024-11-06 10:00:19.668301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650242 ] 00:08:16.496 [2024-11-06 10:00:19.749959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.496 [2024-11-06 10:00:19.791470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.066 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3650242 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:17.067 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3650242 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3650242 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3650242 ']' 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3650242 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.639 10:00:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3650242 00:08:17.639 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.639 
10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.639 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3650242' 00:08:17.639 killing process with pid 3650242 00:08:17.639 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3650242 00:08:17.639 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3650242 00:08:17.900 00:08:17.900 real 0m1.645s 00:08:17.900 user 0m1.779s 00:08:17.900 sys 0m0.553s 00:08:17.900 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.900 10:00:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.900 ************************************ 00:08:17.900 END TEST default_locks_via_rpc 00:08:17.900 ************************************ 00:08:17.900 10:00:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:17.900 10:00:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:17.900 10:00:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.900 10:00:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.900 ************************************ 00:08:17.900 START TEST non_locking_app_on_locked_coremask 00:08:17.900 ************************************ 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3650612 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3650612 /var/tmp/spdk.sock 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3650612 ']' 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.900 10:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.900 [2024-11-06 10:00:21.392313] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
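default_locks_via_rpc, which finished just above, exercises the same core lock but toggles it at runtime over the RPC socket instead of at startup. A hedged sketch of that toggle, assuming the default socket path /var/tmp/spdk.sock:

    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!; sleep 2
    # release the core lock while the target keeps running...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no lock while disabled"
    # ...then take it back
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"
    kill -9 "$pid"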
00:08:17.900 [2024-11-06 10:00:21.392365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650612 ] 00:08:18.161 [2024-11-06 10:00:21.472538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.161 [2024-11-06 10:00:21.513096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3650773 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3650773 /var/tmp/spdk2.sock 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3650773 ']' 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:18.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.733 10:00:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 [2024-11-06 10:00:22.216424] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:18.733 [2024-11-06 10:00:22.216476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650773 ] 00:08:18.995 [2024-11-06 10:00:22.339539] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:18.995 [2024-11-06 10:00:22.339571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.995 [2024-11-06 10:00:22.412241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.565 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.565 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:19.565 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3650612 00:08:19.565 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3650612 00:08:19.565 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.135 lslocks: write error 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3650612 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3650612 ']' 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3650612 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.135 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3650612 00:08:20.136 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.136 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.136 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3650612' 00:08:20.136 killing process with pid 3650612 00:08:20.136 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3650612 00:08:20.136 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3650612 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3650773 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3650773 ']' 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3650773 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.707 10:00:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3650773 00:08:20.707 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.707 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.707 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3650773' 00:08:20.707 
killing process with pid 3650773 00:08:20.707 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3650773 00:08:20.707 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3650773 00:08:20.969 00:08:20.969 real 0m2.922s 00:08:20.969 user 0m3.228s 00:08:20.969 sys 0m0.891s 00:08:20.969 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.969 10:00:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.969 ************************************ 00:08:20.969 END TEST non_locking_app_on_locked_coremask 00:08:20.969 ************************************ 00:08:20.969 10:00:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:20.969 10:00:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.969 10:00:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.969 10:00:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.969 ************************************ 00:08:20.969 START TEST locking_app_on_unlocked_coremask 00:08:20.969 ************************************ 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3651319 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3651319 /var/tmp/spdk.sock 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3651319 ']' 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.969 10:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.969 [2024-11-06 10:00:24.390580] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:20.969 [2024-11-06 10:00:24.390631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651319 ] 00:08:20.969 [2024-11-06 10:00:24.469180] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
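The non_locking_app_on_locked_coremask run above shows two targets sharing core 0: the first claims the core lock, the second is started with --disable-cpumask-locks and a separate RPC socket, so it comes up anyway ("CPU core locks deactivated."). A minimal sketch of that coexistence; paths and sleeps are illustrative assumptions:

    ./build/bin/spdk_tgt -m 0x1 &                                            # locks core 0
    pid1=$!; sleep 2
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!; sleep 2                                                         # starts despite the existing lock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "lock stays with pid $pid1"
    kill -9 "$pid1" "$pid2"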
00:08:20.969 [2024-11-06 10:00:24.469214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.229 [2024-11-06 10:00:24.504485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.801 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.801 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:21.801 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3651339 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3651339 /var/tmp/spdk2.sock 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3651339 ']' 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.802 10:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.802 [2024-11-06 10:00:25.212784] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:21.802 [2024-11-06 10:00:25.212826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651339 ] 00:08:22.062 [2024-11-06 10:00:25.326510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.062 [2024-11-06 10:00:25.399138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.634 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.634 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:22.634 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3651339 00:08:22.634 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3651339 00:08:22.634 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:22.895 lslocks: write error 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3651319 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3651319 ']' 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3651319 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3651319 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3651319' 00:08:22.895 killing process with pid 3651319 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3651319 00:08:22.895 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3651319 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3651339 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3651339 ']' 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3651339 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3651339 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.467 10:00:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3651339' 00:08:23.467 killing process with pid 3651339 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3651339 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3651339 00:08:23.467 00:08:23.467 real 0m2.612s 00:08:23.467 user 0m2.910s 00:08:23.467 sys 0m0.694s 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:23.467 10:00:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.467 ************************************ 00:08:23.467 END TEST locking_app_on_unlocked_coremask 00:08:23.467 ************************************ 00:08:23.727 10:00:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:23.727 10:00:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:23.727 10:00:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:23.727 10:00:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.727 ************************************ 00:08:23.727 START TEST locking_app_on_locked_coremask 00:08:23.727 ************************************ 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3651725 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3651725 /var/tmp/spdk.sock 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3651725 ']' 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.727 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.728 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.728 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.728 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.728 [2024-11-06 10:00:27.079853] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:23.728 [2024-11-06 10:00:27.079926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651725 ] 00:08:23.728 [2024-11-06 10:00:27.159929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.728 [2024-11-06 10:00:27.200306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3652037 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3652037 /var/tmp/spdk2.sock 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3652037 /var/tmp/spdk2.sock 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3652037 /var/tmp/spdk2.sock 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3652037 ']' 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.671 10:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.671 [2024-11-06 10:00:27.898335] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:24.671 [2024-11-06 10:00:27.898387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652037 ] 00:08:24.671 [2024-11-06 10:00:28.020444] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3651725 has claimed it. 00:08:24.671 [2024-11-06 10:00:28.020483] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:25.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3652037) - No such process 00:08:25.242 ERROR: process (pid: 3652037) is no longer running 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3651725 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3651725 00:08:25.242 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:25.502 lslocks: write error 00:08:25.502 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3651725 00:08:25.503 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3651725 ']' 00:08:25.503 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3651725 00:08:25.503 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:25.503 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.503 10:00:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3651725 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3651725' 00:08:25.763 killing process with pid 3651725 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3651725 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3651725 00:08:25.763 00:08:25.763 real 0m2.215s 00:08:25.763 user 0m2.504s 00:08:25.763 sys 0m0.588s 00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:08:25.763 10:00:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.763 ************************************ 00:08:25.763 END TEST locking_app_on_locked_coremask 00:08:25.763 ************************************ 00:08:26.024 10:00:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:26.024 10:00:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:26.024 10:00:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.024 10:00:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.024 ************************************ 00:08:26.024 START TEST locking_overlapped_coremask 00:08:26.024 ************************************ 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3652401 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3652401 /var/tmp/spdk.sock 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3652401 ']' 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.024 10:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.024 [2024-11-06 10:00:29.370907] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
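locking_app_on_locked_coremask, completed just above, is the negative case: with locks enabled on both sides, a second target on an already-claimed core must refuse to start ("Cannot create lock on core 0, probably process ... has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting."). A sketch of that expectation, assuming the rejected instance exits non-zero (the suite checks this indirectly through its NOT waitforlisten wrapper):

    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!; sleep 2
    # a second instance on core 0 should fail to acquire the lock and exit
    if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance rejected, as expected"
    fi
    kill -9 "$pid1"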
00:08:26.024 [2024-11-06 10:00:29.370961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652401 ] 00:08:26.024 [2024-11-06 10:00:29.451911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.024 [2024-11-06 10:00:29.494749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.024 [2024-11-06 10:00:29.494892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.024 [2024-11-06 10:00:29.494913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3652421 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3652421 /var/tmp/spdk2.sock 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3652421 /var/tmp/spdk2.sock 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.963 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3652421 /var/tmp/spdk2.sock 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3652421 ']' 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:26.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.964 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.964 [2024-11-06 10:00:30.226697] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:26.964 [2024-11-06 10:00:30.226750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652421 ] 00:08:26.964 [2024-11-06 10:00:30.324548] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3652401 has claimed it. 00:08:26.964 [2024-11-06 10:00:30.324582] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:27.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3652421) - No such process 00:08:27.537 ERROR: process (pid: 3652421) is no longer running 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3652401 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3652401 ']' 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3652401 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3652401 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3652401' 00:08:27.537 killing process with pid 3652401 00:08:27.537 10:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3652401 00:08:27.537 10:00:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3652401 00:08:27.799 00:08:27.799 real 0m1.802s 00:08:27.799 user 0m5.198s 00:08:27.799 sys 0m0.395s 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 ************************************ 00:08:27.799 END TEST locking_overlapped_coremask 00:08:27.799 ************************************ 00:08:27.799 10:00:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:27.799 10:00:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:27.799 10:00:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.799 10:00:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 ************************************ 00:08:27.799 START TEST locking_overlapped_coremask_via_rpc 00:08:27.799 ************************************ 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3652777 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3652777 /var/tmp/spdk.sock 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3652777 ']' 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.799 10:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.799 [2024-11-06 10:00:31.245837] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:27.799 [2024-11-06 10:00:31.245892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652777 ] 00:08:28.060 [2024-11-06 10:00:31.324724] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:28.060 [2024-11-06 10:00:31.324756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.060 [2024-11-06 10:00:31.362472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.060 [2024-11-06 10:00:31.362588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.060 [2024-11-06 10:00:31.362590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3652807 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3652807 /var/tmp/spdk2.sock 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3652807 ']' 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:28.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.631 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.631 [2024-11-06 10:00:32.098297] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:28.631 [2024-11-06 10:00:32.098349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652807 ] 00:08:28.891 [2024-11-06 10:00:32.197461] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:28.891 [2024-11-06 10:00:32.197488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.891 [2024-11-06 10:00:32.256897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.891 [2024-11-06 10:00:32.259921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.891 [2024-11-06 10:00:32.259923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.464 [2024-11-06 10:00:32.900923] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3652777 has claimed it. 
00:08:29.464 request: 00:08:29.464 { 00:08:29.464 "method": "framework_enable_cpumask_locks", 00:08:29.464 "req_id": 1 00:08:29.464 } 00:08:29.464 Got JSON-RPC error response 00:08:29.464 response: 00:08:29.464 { 00:08:29.464 "code": -32603, 00:08:29.464 "message": "Failed to claim CPU core: 2" 00:08:29.464 } 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3652777 /var/tmp/spdk.sock 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3652777 ']' 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.464 10:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3652807 /var/tmp/spdk2.sock 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3652807 ']' 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
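The -32603 exchange above is the crux of the locking_overlapped_coremask_via_rpc case: both targets start with --disable-cpumask-locks, the first (mask 0x7, cores 0-2) then claims its cores via framework_enable_cpumask_locks, and the second (mask 0x1c, cores 2-4) is refused because core 2 is already locked. A minimal standalone sketch of the same sequence, assuming SPDK_DIR points at a built SPDK tree and using plain sleeps in place of the harness's waitforlisten helper:

    # Sketch only: SPDK_DIR is an assumption, sleeps stand in for waitforlisten.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # First target: cores 0-2, CPU core locks off at startup.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x7 --disable-cpumask-locks &
    pid1=$!
    sleep 2

    # Second target: cores 2-4 (overlaps on core 2), separate RPC socket.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    pid2=$!
    sleep 2

    # First claim succeeds; /var/tmp/spdk_cpu_lock_000..002 should now exist.
    "$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*

    # Second claim is refused with -32603 ("Failed to claim CPU core: 2").
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || echo "claim refused, as expected"

    kill $pid1 $pid2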
00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.726 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:29.988 00:08:29.988 real 0m2.084s 00:08:29.988 user 0m0.852s 00:08:29.988 sys 0m0.156s 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.988 10:00:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.988 ************************************ 00:08:29.988 END TEST locking_overlapped_coremask_via_rpc 00:08:29.988 ************************************ 00:08:29.988 10:00:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:29.988 10:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3652777 ]] 00:08:29.988 10:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3652777 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3652777 ']' 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3652777 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3652777 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3652777' 00:08:29.988 killing process with pid 3652777 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3652777 00:08:29.988 10:00:33 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3652777 00:08:30.249 10:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3652807 ]] 00:08:30.249 10:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3652807 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3652807 ']' 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3652807 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3652807 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3652807' 00:08:30.249 killing process with pid 3652807 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3652807 00:08:30.249 10:00:33 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3652807 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3652777 ]] 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3652777 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3652777 ']' 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3652777 00:08:30.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3652777) - No such process 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3652777 is not found' 00:08:30.510 Process with pid 3652777 is not found 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3652807 ]] 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3652807 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3652807 ']' 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3652807 00:08:30.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3652807) - No such process 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3652807 is not found' 00:08:30.510 Process with pid 3652807 is not found 00:08:30.510 10:00:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.510 00:08:30.510 real 0m15.790s 00:08:30.510 user 0m27.937s 00:08:30.510 sys 0m4.615s 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.510 10:00:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.510 ************************************ 00:08:30.510 END TEST cpu_locks 00:08:30.510 ************************************ 00:08:30.510 00:08:30.510 real 0m41.450s 00:08:30.510 user 1m21.524s 00:08:30.510 sys 0m7.966s 00:08:30.511 10:00:33 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.511 10:00:33 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.511 ************************************ 00:08:30.511 END TEST event 00:08:30.511 ************************************ 00:08:30.511 10:00:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:30.511 10:00:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:30.511 10:00:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.511 10:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.511 ************************************ 00:08:30.511 START TEST thread 00:08:30.511 ************************************ 00:08:30.511 10:00:33 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:30.772 * Looking for test storage... 00:08:30.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:30.772 10:00:34 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.772 10:00:34 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.772 10:00:34 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.772 10:00:34 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.772 10:00:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.773 10:00:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.773 10:00:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.773 10:00:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.773 10:00:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.773 10:00:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.773 10:00:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.773 10:00:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.773 10:00:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.773 10:00:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.773 10:00:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.773 10:00:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:30.773 10:00:34 thread -- scripts/common.sh@345 -- # : 1 00:08:30.773 10:00:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.773 10:00:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.773 10:00:34 thread -- scripts/common.sh@365 -- # decimal 1 00:08:30.773 10:00:34 thread -- scripts/common.sh@353 -- # local d=1 00:08:30.773 10:00:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.773 10:00:34 thread -- scripts/common.sh@355 -- # echo 1 00:08:30.773 10:00:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.773 10:00:34 thread -- scripts/common.sh@366 -- # decimal 2 00:08:30.773 10:00:34 thread -- scripts/common.sh@353 -- # local d=2 00:08:30.773 10:00:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.773 10:00:34 thread -- scripts/common.sh@355 -- # echo 2 00:08:30.773 10:00:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.773 10:00:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.773 10:00:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.773 10:00:34 thread -- scripts/common.sh@368 -- # return 0 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.773 --rc genhtml_branch_coverage=1 00:08:30.773 --rc genhtml_function_coverage=1 00:08:30.773 --rc genhtml_legend=1 00:08:30.773 --rc geninfo_all_blocks=1 00:08:30.773 --rc geninfo_unexecuted_blocks=1 00:08:30.773 00:08:30.773 ' 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.773 --rc genhtml_branch_coverage=1 00:08:30.773 --rc genhtml_function_coverage=1 00:08:30.773 --rc genhtml_legend=1 00:08:30.773 --rc geninfo_all_blocks=1 00:08:30.773 --rc geninfo_unexecuted_blocks=1 00:08:30.773 
00:08:30.773 ' 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.773 --rc genhtml_branch_coverage=1 00:08:30.773 --rc genhtml_function_coverage=1 00:08:30.773 --rc genhtml_legend=1 00:08:30.773 --rc geninfo_all_blocks=1 00:08:30.773 --rc geninfo_unexecuted_blocks=1 00:08:30.773 00:08:30.773 ' 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.773 --rc genhtml_branch_coverage=1 00:08:30.773 --rc genhtml_function_coverage=1 00:08:30.773 --rc genhtml_legend=1 00:08:30.773 --rc geninfo_all_blocks=1 00:08:30.773 --rc geninfo_unexecuted_blocks=1 00:08:30.773 00:08:30.773 ' 00:08:30.773 10:00:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.773 10:00:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.773 ************************************ 00:08:30.773 START TEST thread_poller_perf 00:08:30.773 ************************************ 00:08:30.773 10:00:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.773 [2024-11-06 10:00:34.214198] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:30.773 [2024-11-06 10:00:34.214301] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653465 ] 00:08:31.034 [2024-11-06 10:00:34.301230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.034 [2024-11-06 10:00:34.343353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.034 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:31.977 [2024-11-06T09:00:35.478Z] ====================================== 00:08:31.977 [2024-11-06T09:00:35.478Z] busy:2412192984 (cyc) 00:08:31.977 [2024-11-06T09:00:35.478Z] total_run_count: 288000 00:08:31.977 [2024-11-06T09:00:35.478Z] tsc_hz: 2400000000 (cyc) 00:08:31.977 [2024-11-06T09:00:35.478Z] ====================================== 00:08:31.977 [2024-11-06T09:00:35.478Z] poller_cost: 8375 (cyc), 3489 (nsec) 00:08:31.977 00:08:31.977 real 0m1.193s 00:08:31.977 user 0m1.110s 00:08:31.977 sys 0m0.079s 00:08:31.977 10:00:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.977 10:00:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.977 ************************************ 00:08:31.977 END TEST thread_poller_perf 00:08:31.977 ************************************ 00:08:31.977 10:00:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.977 10:00:35 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:31.977 10:00:35 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.977 10:00:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.977 ************************************ 00:08:31.977 START TEST thread_poller_perf 00:08:31.977 ************************************ 00:08:31.977 10:00:35 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.977 [2024-11-06 10:00:35.477089] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:31.977 [2024-11-06 10:00:35.477197] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653618 ] 00:08:32.239 [2024-11-06 10:00:35.558635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.239 [2024-11-06 10:00:35.595463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.239 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:33.182 [2024-11-06T09:00:36.683Z] ====================================== 00:08:33.182 [2024-11-06T09:00:36.683Z] busy:2401969588 (cyc) 00:08:33.182 [2024-11-06T09:00:36.683Z] total_run_count: 3807000 00:08:33.182 [2024-11-06T09:00:36.683Z] tsc_hz: 2400000000 (cyc) 00:08:33.182 [2024-11-06T09:00:36.683Z] ====================================== 00:08:33.182 [2024-11-06T09:00:36.683Z] poller_cost: 630 (cyc), 262 (nsec) 00:08:33.182 00:08:33.182 real 0m1.172s 00:08:33.182 user 0m1.096s 00:08:33.182 sys 0m0.072s 00:08:33.182 10:00:36 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.182 10:00:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.182 ************************************ 00:08:33.183 END TEST thread_poller_perf 00:08:33.183 ************************************ 00:08:33.183 10:00:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:33.183 00:08:33.183 real 0m2.713s 00:08:33.183 user 0m2.379s 00:08:33.183 sys 0m0.345s 00:08:33.183 10:00:36 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.183 10:00:36 thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.183 ************************************ 00:08:33.183 END TEST thread 00:08:33.183 ************************************ 00:08:33.444 10:00:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:33.444 10:00:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:33.444 10:00:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:33.444 10:00:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.444 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:33.444 ************************************ 00:08:33.444 START TEST app_cmdline 00:08:33.444 ************************************ 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:33.444 * Looking for test storage... 
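The poller_cost figures in the two runs above appear to be the busy cycle count divided by the completed poller iterations, converted to nanoseconds with the reported 2400000000 Hz TSC; a quick re-derivation with the numbers copied from the output (plain shell integer arithmetic, matching the truncation in the report):

    echo $(( 2412192984 / 288000 ))                               # ~8375 cyc  (1 us period run)
    echo $(( 2412192984 / 288000 * 1000000000 / 2400000000 ))     # ~3489 nsec
    echo $(( 2401969588 / 3807000 ))                              # ~630 cyc   (0 us period run)
    echo $(( 2401969588 / 3807000 * 1000000000 / 2400000000 ))    # ~262 nsec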
00:08:33.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.444 10:00:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.444 --rc genhtml_branch_coverage=1 00:08:33.444 --rc genhtml_function_coverage=1 00:08:33.444 --rc genhtml_legend=1 00:08:33.444 --rc geninfo_all_blocks=1 00:08:33.444 --rc geninfo_unexecuted_blocks=1 00:08:33.444 00:08:33.444 ' 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.444 --rc genhtml_branch_coverage=1 00:08:33.444 --rc genhtml_function_coverage=1 00:08:33.444 --rc genhtml_legend=1 00:08:33.444 --rc geninfo_all_blocks=1 00:08:33.444 --rc geninfo_unexecuted_blocks=1 
00:08:33.444 00:08:33.444 ' 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.444 --rc genhtml_branch_coverage=1 00:08:33.444 --rc genhtml_function_coverage=1 00:08:33.444 --rc genhtml_legend=1 00:08:33.444 --rc geninfo_all_blocks=1 00:08:33.444 --rc geninfo_unexecuted_blocks=1 00:08:33.444 00:08:33.444 ' 00:08:33.444 10:00:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.444 --rc genhtml_branch_coverage=1 00:08:33.444 --rc genhtml_function_coverage=1 00:08:33.444 --rc genhtml_legend=1 00:08:33.444 --rc geninfo_all_blocks=1 00:08:33.444 --rc geninfo_unexecuted_blocks=1 00:08:33.444 00:08:33.444 ' 00:08:33.444 10:00:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:33.445 10:00:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3653994 00:08:33.445 10:00:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3653994 00:08:33.445 10:00:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3653994 ']' 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.445 10:00:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.706 [2024-11-06 10:00:37.004958] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:33.706 [2024-11-06 10:00:37.005011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653994 ] 00:08:33.706 [2024-11-06 10:00:37.086551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.706 [2024-11-06 10:00:37.123079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.648 10:00:37 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.648 10:00:37 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:34.648 10:00:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:34.648 { 00:08:34.649 "version": "SPDK v25.01-pre git sha1 d1c46ed8e", 00:08:34.649 "fields": { 00:08:34.649 "major": 25, 00:08:34.649 "minor": 1, 00:08:34.649 "patch": 0, 00:08:34.649 "suffix": "-pre", 00:08:34.649 "commit": "d1c46ed8e" 00:08:34.649 } 00:08:34.649 } 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:34.649 10:00:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:34.649 10:00:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.649 10:00:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 10:00:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.649 10:00:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:34.649 10:00:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:34.649 10:00:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.649 10:00:38 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.910 request: 00:08:34.910 { 00:08:34.910 "method": "env_dpdk_get_mem_stats", 00:08:34.910 "req_id": 1 00:08:34.910 } 00:08:34.910 Got JSON-RPC error response 00:08:34.910 response: 00:08:34.910 { 00:08:34.910 "code": -32601, 00:08:34.910 "message": "Method not found" 00:08:34.910 } 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.910 10:00:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3653994 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3653994 ']' 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3653994 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3653994 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3653994' 00:08:34.910 killing process with pid 3653994 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@971 -- # kill 3653994 00:08:34.910 10:00:38 app_cmdline -- common/autotest_common.sh@976 -- # wait 3653994 00:08:35.172 00:08:35.172 real 0m1.733s 00:08:35.172 user 0m2.075s 00:08:35.172 sys 0m0.459s 00:08:35.172 10:00:38 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.172 10:00:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.172 ************************************ 00:08:35.172 END TEST app_cmdline 00:08:35.172 ************************************ 00:08:35.172 10:00:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:35.172 10:00:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:35.172 10:00:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.172 10:00:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.172 ************************************ 00:08:35.172 START TEST version 00:08:35.172 ************************************ 00:08:35.172 10:00:38 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:35.172 * Looking for test storage... 
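The -32601 "Method not found" response above is the expected effect of the --rpcs-allowed list the target was launched with: only spdk_get_version and rpc_get_methods are reachable, so env_dpdk_get_mem_stats is rejected as if it did not exist. A short sketch of the same check, again assuming SPDK_DIR points at a built tree and using a sleep instead of waitforlisten:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumption
    "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    pid=$!
    sleep 2

    "$SPDK_DIR/scripts/rpc.py" spdk_get_version                    # allowed -> version JSON
    "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort # only the two allowed methods
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats               # not on the list -> -32601

    kill $pid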
00:08:35.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:35.172 10:00:38 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:35.172 10:00:38 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:35.172 10:00:38 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:35.433 10:00:38 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:35.433 10:00:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.433 10:00:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.433 10:00:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.433 10:00:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.433 10:00:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.433 10:00:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.433 10:00:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.433 10:00:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.433 10:00:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.433 10:00:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.433 10:00:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.433 10:00:38 version -- scripts/common.sh@344 -- # case "$op" in 00:08:35.433 10:00:38 version -- scripts/common.sh@345 -- # : 1 00:08:35.433 10:00:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.433 10:00:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.433 10:00:38 version -- scripts/common.sh@365 -- # decimal 1 00:08:35.433 10:00:38 version -- scripts/common.sh@353 -- # local d=1 00:08:35.433 10:00:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.433 10:00:38 version -- scripts/common.sh@355 -- # echo 1 00:08:35.433 10:00:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.433 10:00:38 version -- scripts/common.sh@366 -- # decimal 2 00:08:35.433 10:00:38 version -- scripts/common.sh@353 -- # local d=2 00:08:35.433 10:00:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.433 10:00:38 version -- scripts/common.sh@355 -- # echo 2 00:08:35.433 10:00:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.433 10:00:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.433 10:00:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.433 10:00:38 version -- scripts/common.sh@368 -- # return 0 00:08:35.433 10:00:38 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.433 10:00:38 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.433 --rc genhtml_branch_coverage=1 00:08:35.433 --rc genhtml_function_coverage=1 00:08:35.433 --rc genhtml_legend=1 00:08:35.433 --rc geninfo_all_blocks=1 00:08:35.433 --rc geninfo_unexecuted_blocks=1 00:08:35.433 00:08:35.433 ' 00:08:35.433 10:00:38 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.433 --rc genhtml_branch_coverage=1 00:08:35.433 --rc genhtml_function_coverage=1 00:08:35.433 --rc genhtml_legend=1 00:08:35.433 --rc geninfo_all_blocks=1 00:08:35.433 --rc geninfo_unexecuted_blocks=1 00:08:35.433 00:08:35.433 ' 00:08:35.433 10:00:38 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:35.434 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.434 --rc genhtml_branch_coverage=1 00:08:35.434 --rc genhtml_function_coverage=1 00:08:35.434 --rc genhtml_legend=1 00:08:35.434 --rc geninfo_all_blocks=1 00:08:35.434 --rc geninfo_unexecuted_blocks=1 00:08:35.434 00:08:35.434 ' 00:08:35.434 10:00:38 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:35.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.434 --rc genhtml_branch_coverage=1 00:08:35.434 --rc genhtml_function_coverage=1 00:08:35.434 --rc genhtml_legend=1 00:08:35.434 --rc geninfo_all_blocks=1 00:08:35.434 --rc geninfo_unexecuted_blocks=1 00:08:35.434 00:08:35.434 ' 00:08:35.434 10:00:38 version -- app/version.sh@17 -- # get_header_version major 00:08:35.434 10:00:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # cut -f2 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.434 10:00:38 version -- app/version.sh@17 -- # major=25 00:08:35.434 10:00:38 version -- app/version.sh@18 -- # get_header_version minor 00:08:35.434 10:00:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # cut -f2 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.434 10:00:38 version -- app/version.sh@18 -- # minor=1 00:08:35.434 10:00:38 version -- app/version.sh@19 -- # get_header_version patch 00:08:35.434 10:00:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # cut -f2 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.434 10:00:38 version -- app/version.sh@19 -- # patch=0 00:08:35.434 10:00:38 version -- app/version.sh@20 -- # get_header_version suffix 00:08:35.434 10:00:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # cut -f2 00:08:35.434 10:00:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.434 10:00:38 version -- app/version.sh@20 -- # suffix=-pre 00:08:35.434 10:00:38 version -- app/version.sh@22 -- # version=25.1 00:08:35.434 10:00:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:35.434 10:00:38 version -- app/version.sh@28 -- # version=25.1rc0 00:08:35.434 10:00:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:35.434 10:00:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:35.434 10:00:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:35.434 10:00:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:35.434 00:08:35.434 real 0m0.277s 00:08:35.434 user 0m0.162s 00:08:35.434 sys 0m0.162s 00:08:35.434 10:00:38 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.434 
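The version string checked above is assembled from include/spdk/version.h with a plain grep/cut/tr pipeline and then compared against what the Python bindings report; a condensed restatement of the steps visible in the trace, assuming SPDK_DIR is the checked-out tree and that the -pre suffix maps to rc0 as it does in this run:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumption
    hdr="$SPDK_DIR/include/spdk/version.h"
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"                  # patch appended only when nonzero
    [[ $suffix == -pre ]] && version="${version}rc0"               # -pre -> rc0, per the trace above
    echo "$version"                                                # 25.1rc0 for this checkout
    PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)'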
10:00:38 version -- common/autotest_common.sh@10 -- # set +x 00:08:35.434 ************************************ 00:08:35.434 END TEST version 00:08:35.434 ************************************ 00:08:35.434 10:00:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:35.434 10:00:38 -- spdk/autotest.sh@194 -- # uname -s 00:08:35.434 10:00:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:35.434 10:00:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:35.434 10:00:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:35.434 10:00:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:35.434 10:00:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.434 10:00:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.434 10:00:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:35.434 10:00:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:35.434 10:00:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:35.434 10:00:38 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.434 10:00:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.434 10:00:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.696 ************************************ 00:08:35.696 START TEST nvmf_tcp 00:08:35.696 ************************************ 00:08:35.696 10:00:38 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:35.696 * Looking for test storage... 
00:08:35.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.696 10:00:39 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:35.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.696 --rc genhtml_branch_coverage=1 00:08:35.696 --rc genhtml_function_coverage=1 00:08:35.696 --rc genhtml_legend=1 00:08:35.696 --rc geninfo_all_blocks=1 00:08:35.696 --rc geninfo_unexecuted_blocks=1 00:08:35.696 00:08:35.696 ' 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:35.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.696 --rc genhtml_branch_coverage=1 00:08:35.696 --rc genhtml_function_coverage=1 00:08:35.696 --rc genhtml_legend=1 00:08:35.696 --rc geninfo_all_blocks=1 00:08:35.696 --rc geninfo_unexecuted_blocks=1 00:08:35.696 00:08:35.696 ' 00:08:35.696 10:00:39 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:08:35.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.696 --rc genhtml_branch_coverage=1 00:08:35.696 --rc genhtml_function_coverage=1 00:08:35.696 --rc genhtml_legend=1 00:08:35.696 --rc geninfo_all_blocks=1 00:08:35.696 --rc geninfo_unexecuted_blocks=1 00:08:35.696 00:08:35.697 ' 00:08:35.697 10:00:39 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:35.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.697 --rc genhtml_branch_coverage=1 00:08:35.697 --rc genhtml_function_coverage=1 00:08:35.697 --rc genhtml_legend=1 00:08:35.697 --rc geninfo_all_blocks=1 00:08:35.697 --rc geninfo_unexecuted_blocks=1 00:08:35.697 00:08:35.697 ' 00:08:35.697 10:00:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:35.697 10:00:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:35.697 10:00:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:35.697 10:00:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.697 10:00:39 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.697 10:00:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.958 ************************************ 00:08:35.958 START TEST nvmf_target_core 00:08:35.958 ************************************ 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:35.958 * Looking for test storage... 00:08:35.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:35.958 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.959 --rc genhtml_branch_coverage=1 00:08:35.959 --rc genhtml_function_coverage=1 00:08:35.959 --rc genhtml_legend=1 00:08:35.959 --rc geninfo_all_blocks=1 00:08:35.959 --rc geninfo_unexecuted_blocks=1 00:08:35.959 00:08:35.959 ' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.959 --rc genhtml_branch_coverage=1 00:08:35.959 --rc genhtml_function_coverage=1 00:08:35.959 --rc genhtml_legend=1 00:08:35.959 --rc geninfo_all_blocks=1 00:08:35.959 --rc geninfo_unexecuted_blocks=1 00:08:35.959 00:08:35.959 ' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.959 --rc genhtml_branch_coverage=1 00:08:35.959 --rc genhtml_function_coverage=1 00:08:35.959 --rc genhtml_legend=1 00:08:35.959 --rc geninfo_all_blocks=1 00:08:35.959 --rc geninfo_unexecuted_blocks=1 00:08:35.959 00:08:35.959 ' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.959 --rc genhtml_branch_coverage=1 00:08:35.959 --rc genhtml_function_coverage=1 00:08:35.959 --rc genhtml_legend=1 00:08:35.959 --rc geninfo_all_blocks=1 00:08:35.959 --rc geninfo_unexecuted_blocks=1 00:08:35.959 00:08:35.959 ' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.959 10:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.221 
************************************ 00:08:36.221 START TEST nvmf_abort 00:08:36.221 ************************************ 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:36.221 * Looking for test storage... 00:08:36.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:36.221 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.222 --rc genhtml_branch_coverage=1 00:08:36.222 --rc genhtml_function_coverage=1 00:08:36.222 --rc genhtml_legend=1 00:08:36.222 --rc geninfo_all_blocks=1 00:08:36.222 --rc geninfo_unexecuted_blocks=1 00:08:36.222 00:08:36.222 ' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.222 --rc genhtml_branch_coverage=1 00:08:36.222 --rc genhtml_function_coverage=1 00:08:36.222 --rc genhtml_legend=1 00:08:36.222 --rc geninfo_all_blocks=1 00:08:36.222 --rc geninfo_unexecuted_blocks=1 00:08:36.222 00:08:36.222 ' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:36.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.222 --rc genhtml_branch_coverage=1 00:08:36.222 --rc genhtml_function_coverage=1 00:08:36.222 --rc genhtml_legend=1 00:08:36.222 --rc geninfo_all_blocks=1 00:08:36.222 --rc geninfo_unexecuted_blocks=1 00:08:36.222 00:08:36.222 ' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.222 --rc genhtml_branch_coverage=1 00:08:36.222 --rc genhtml_function_coverage=1 00:08:36.222 --rc genhtml_legend=1 00:08:36.222 --rc geninfo_all_blocks=1 00:08:36.222 --rc geninfo_unexecuted_blocks=1 00:08:36.222 00:08:36.222 ' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
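The warning "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" that shows up in the trace above (and again each time common.sh is sourced) comes from the test at nvmf/common.sh@33, which expands to '[' '' -eq 1 ']': the variable being checked is empty, and [ needs an integer on both sides of -eq, so it prints the warning and returns non-zero; the condition is simply treated as false and the run continues. A minimal reproduction, and one possible way to keep the same behaviour without the noise, using a placeholder variable name since the trace only shows the empty expansion:

    # reproduce the message seen at nvmf/common.sh line 33
    flag=""                            # placeholder; the real variable name is not visible in the trace
    [ "$flag" -eq 1 ] && echo enabled  # -> "[: : integer expression expected", non-zero status, branch skipped
    # quieter equivalent: default an empty value to 0 before the numeric comparison
    [ "${flag:-0}" -eq 1 ] && echo enabled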
00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.222 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.482 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:36.482 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:36.482 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:36.482 10:00:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.622 10:00:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:44.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:44.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.622 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.623 10:00:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:44.623 Found net devices under 0000:31:00.0: cvl_0_0 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:44.623 Found net devices under 0000:31:00.1: cvl_0_1 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.623 10:00:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.623 10:00:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:08:44.623 00:08:44.623 --- 10.0.0.2 ping statistics --- 00:08:44.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.623 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:44.623 00:08:44.623 --- 10.0.0.1 ping statistics --- 00:08:44.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.623 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3659162 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3659162 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3659162 ']' 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.623 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:44.884 [2024-11-06 10:00:48.166103] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
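At this point nvmf_tcp_init has turned the two ice-bound E810 ports (0x8086:0x159b, enumerated above as cvl_0_0 and cvl_0_1) into a self-contained TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened with an iptables rule, and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. Condensed, the commands traced above amount to the following sketch (interface names are specific to this host's NICs, and the iptables comment tag is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> target namespace (0.647 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace (0.141 ms above)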
00:08:44.884 [2024-11-06 10:00:48.166156] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.884 [2024-11-06 10:00:48.269296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.884 [2024-11-06 10:00:48.322168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.884 [2024-11-06 10:00:48.322217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.884 [2024-11-06 10:00:48.322225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.884 [2024-11-06 10:00:48.322232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.884 [2024-11-06 10:00:48.322239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.884 [2024-11-06 10:00:48.324041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.884 [2024-11-06 10:00:48.324211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.884 [2024-11-06 10:00:48.324211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.827 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.827 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:08:45.827 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.827 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.827 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 [2024-11-06 10:00:49.019558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 Malloc0 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 Delay0 
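Before the subsystem is created, target/abort.sh layers a delay bdev over the malloc bdev: per the parameters traced above, Delay0 sits on top of Malloc0 with 1,000,000 µs (one second) of added latency for both the average and 99th-percentile read (-r/-t) and write (-w/-n) cases, which is presumably what keeps enough I/O in flight for the abort example to have outstanding requests to cancel. Restated on its own (a sketch via scripts/rpc.py, equivalent to the rpc_cmd call traced above, assuming the default /var/tmp/spdk.sock RPC socket):

    # -r/-t: avg and p99 read latency, -w/-n: avg and p99 write latency, all in microseconds
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000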
00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 [2024-11-06 10:00:49.100996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.827 10:00:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:45.827 [2024-11-06 10:00:49.230252] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:48.374 Initializing NVMe Controllers 00:08:48.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:48.374 controller IO queue size 128 less than required 00:08:48.374 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:48.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:48.374 Initialization complete. Launching workers. 
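With the transport, Malloc0 and Delay0 in place, the entries above finish assembling the target: subsystem nqn.2016-06.io.spdk:cnode0 is created (serial SPDK0, -a allowing any host), Delay0 is attached as its namespace, and listeners are added on 10.0.0.2:4420 for both the subsystem and discovery; the abort example then connects from the root namespace and drives the subsystem at queue depth 128 for one second on core 0. Continuing from the Delay0 bdev created earlier, the same bring-up could be issued by hand roughly as follows (a sketch using direct scripts/rpc.py calls in place of the test's rpc_cmd helper, default /var/tmp/spdk.sock assumed):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side, same invocation as traced above (core mask 0x1, 1 s run, queue depth 128):
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128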
00:08:48.374 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28966 00:08:48.374 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29027, failed to submit 62 00:08:48.374 success 28970, unsuccessful 57, failed 0 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.374 rmmod nvme_tcp 00:08:48.374 rmmod nvme_fabrics 00:08:48.374 rmmod nvme_keyring 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3659162 ']' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3659162 ']' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3659162' 00:08:48.374 killing process with pid 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3659162 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.374 10:00:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.374 10:00:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.290 00:08:50.290 real 0m14.177s 00:08:50.290 user 0m14.220s 00:08:50.290 sys 0m6.978s 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:50.290 ************************************ 00:08:50.290 END TEST nvmf_abort 00:08:50.290 ************************************ 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.290 ************************************ 00:08:50.290 START TEST nvmf_ns_hotplug_stress 00:08:50.290 ************************************ 00:08:50.290 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:50.552 * Looking for test storage... 
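For the nvmf_abort stage that just closed above, the example's final counters are internally consistent: 28,970 successful plus 57 unsuccessful aborts account for the 29,027 abort commands submitted, and adding the 62 aborts that could not be submitted gives 29,089, which matches the per-namespace total of 123 completed plus 28,966 failed I/Os. Read that way (an interpretation of the counters, not something the log states outright), essentially every I/O that did not complete normally was covered by an abort attempt, and none of the submitted aborts outright failed. The whole stage, including target bring-up and teardown, took about 14 seconds of wall-clock time (real 0m14.177s).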
00:08:50.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.552 --rc genhtml_branch_coverage=1 00:08:50.552 --rc genhtml_function_coverage=1 00:08:50.552 --rc genhtml_legend=1 00:08:50.552 --rc geninfo_all_blocks=1 00:08:50.552 --rc geninfo_unexecuted_blocks=1 00:08:50.552 00:08:50.552 ' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.552 --rc genhtml_branch_coverage=1 00:08:50.552 --rc genhtml_function_coverage=1 00:08:50.552 --rc genhtml_legend=1 00:08:50.552 --rc geninfo_all_blocks=1 00:08:50.552 --rc geninfo_unexecuted_blocks=1 00:08:50.552 00:08:50.552 ' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.552 --rc genhtml_branch_coverage=1 00:08:50.552 --rc genhtml_function_coverage=1 00:08:50.552 --rc genhtml_legend=1 00:08:50.552 --rc geninfo_all_blocks=1 00:08:50.552 --rc geninfo_unexecuted_blocks=1 00:08:50.552 00:08:50.552 ' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.552 --rc genhtml_branch_coverage=1 00:08:50.552 --rc genhtml_function_coverage=1 00:08:50.552 --rc genhtml_legend=1 00:08:50.552 --rc geninfo_all_blocks=1 00:08:50.552 --rc geninfo_unexecuted_blocks=1 00:08:50.552 00:08:50.552 ' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.552 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.553 10:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:58.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.708 
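The block above is nvmf/common.sh sorting the host's NICs into families by PCI vendor:device ID before the TCP test starts: Intel 0x1592/0x159b land in the e810 array, 0x37d2 in x722, the Mellanox IDs in mlx, and for this e810/tcp run only the E810 ports are kept as pci_devs. A minimal bash sketch of that classification, using only the IDs visible in the trace; the array and variable names mirror the log, but the lspci-based parsing is illustrative and the Mellanox IDs are collapsed to a wildcard for brevity, so this is not the SPDK script itself:

    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx
    while read -r addr vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;   # Intel E810 family
            "$intel:0x37d2")                 x722+=("$addr") ;;   # Intel X722
            "$mellanox:"*)                   mlx+=("$addr")  ;;   # Mellanox ConnectX
        esac
    done < <(lspci -Dnmm | awk '{gsub(/"/,""); print $1, "0x"$3, "0x"$4}')
    pci_devs=("${e810[@]}")    # this e810/tcp run keeps only the E810 ports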
10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:58.708 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:58.708 Found net devices under 0000:31:00.0: cvl_0_0 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:58.708 Found net devices under 0000:31:00.1: cvl_0_1 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.708 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.709 10:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.709 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.709 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.709 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.709 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:08:58.970 00:08:58.970 --- 10.0.0.2 ping statistics --- 00:08:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.970 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:08:58.970 00:08:58.970 --- 10.0.0.1 ping statistics --- 00:08:58.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.970 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3664565 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3664565 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3664565 ']' 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:58.970 10:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.970 [2024-11-06 10:01:02.363182] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:58.970 [2024-11-06 10:01:02.363249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.231 [2024-11-06 10:01:02.471991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.231 [2024-11-06 10:01:02.522672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.231 [2024-11-06 10:01:02.522725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.231 [2024-11-06 10:01:02.522733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.231 [2024-11-06 10:01:02.522741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.231 [2024-11-06 10:01:02.522747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
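By this point nvmftestinit has split the two E810 ports between the host and a target namespace (cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, cvl_0_0 moves into cvl_0_0_ns_spdk as the target at 10.0.0.2), opened TCP port 4420 in iptables, checked connectivity with ping in both directions, and launched nvmf_tgt inside that namespace with core mask 0xE, after which waitforlisten blocks until the RPC socket answers. A condensed sketch of those steps, with paths and interface names taken from the log; the polling loop at the end illustrates the wait-for-RPC pattern and is not the exact waitforlisten implementation:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }   # bail if the target died
        sleep 0.5
    done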
00:08:59.231 [2024-11-06 10:01:02.524562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.231 [2024-11-06 10:01:02.524726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.231 [2024-11-06 10:01:02.524726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:59.802 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.062 [2024-11-06 10:01:03.372097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.062 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:00.322 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.322 [2024-11-06 10:01:03.741554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.322 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.583 10:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:00.844 Malloc0 00:09:00.844 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:00.844 Delay0 00:09:00.844 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.104 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:01.364 NULL1 00:09:01.364 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:01.625 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3665077 00:09:01.625 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:01.625 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:01.625 10:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.566 Read completed with error (sct=0, sc=11) 00:09:02.566 10:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.827 10:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:02.827 10:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:03.088 true 00:09:03.088 10:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:03.088 10:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.028 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.028 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:04.028 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:04.289 true 00:09:04.289 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:04.289 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.289 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.575 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:04.575 10:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:04.863 true 00:09:04.863 10:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:04.863 10:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.816 10:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.077 10:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:06.077 10:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:06.077 true 00:09:06.342 10:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:06.342 10:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.283 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.283 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:07.283 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:07.283 true 00:09:07.284 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:07.284 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.544 10:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.804 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:07.804 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:07.804 true 00:09:08.065 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:08.065 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.065 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.325 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:08.325 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:08.584 true 00:09:08.584 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:08.584 10:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.584 10:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.844 10:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:08.844 10:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:09.104 true 00:09:09.104 10:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:09.104 10:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.046 10:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.307 10:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:10.307 10:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:10.567 true 00:09:10.567 10:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:10.568 
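The repeating pattern above is the body of the ns_hotplug_stress loop: with spdk_nvme_perf driving random 512-byte reads from the initiator side, the test keeps hot-removing namespace 1, re-adding Delay0, and growing NULL1 by one block per pass, for as long as the perf process stays alive. A reconstruction of that loop as it appears in the trace; the real script is test/nvmf/target/ns_hotplug_stress.sh in the SPDK tree, and its exact control flow may differ from this sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                        # loop while spdk_nvme_perf runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove nsid 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add Delay0 back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                     # grow NULL1 under active I/O
    done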
10:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.509 10:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.509 10:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:11.509 10:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:11.770 true 00:09:11.770 10:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:11.770 10:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.711 10:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.711 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:12.712 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:12.712 true 00:09:12.972 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:12.972 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.972 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.233 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:13.233 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:13.233 true 00:09:13.493 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:13.493 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.493 10:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.754 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:13.754 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:13.754 true 00:09:14.015 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:14.015 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.015 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.275 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:14.275 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:14.535 true 00:09:14.535 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:14.535 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.535 10:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.796 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:14.796 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:15.056 true 00:09:15.056 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:15.056 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.056 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.316 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:15.316 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:15.577 true 00:09:15.577 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:15.577 10:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.577 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.838 10:01:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:15.838 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:16.098 true 00:09:16.098 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:16.098 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.098 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.358 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:16.358 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:16.618 true 00:09:16.618 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:16.618 10:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.618 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.879 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:16.879 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:17.140 true 00:09:17.140 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:17.140 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.140 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.401 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:17.401 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:17.662 true 00:09:17.662 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:17.662 10:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.662 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.923 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:17.923 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:18.185 true 00:09:18.185 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:18.185 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.185 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.446 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:18.446 10:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:18.707 true 00:09:18.707 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:18.707 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.707 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.968 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:18.968 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:19.229 true 00:09:19.229 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:19.229 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.229 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.490 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:19.490 10:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:19.750 true 00:09:19.750 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:19.750 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.750 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.035 [2024-11-06 10:01:23.389458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.035 [2024-11-06 10:01:23.389520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.035 [2024-11-06 10:01:23.389551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.035 [2024-11-06 10:01:23.389579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.035 [2024-11-06 10:01:23.389605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.035 [2024-11-06 10:01:23.389634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.389987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.390017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.390047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.036 [2024-11-06 10:01:23.390078] ctrlr_bdev.c: 
00:09:20.035 [2024-11-06 10:01:23.389520 - 10:01:23.407285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same error line repeats continuously over this interval; only the per-message timestamps differ) 00:09:20.041
[2024-11-06 10:01:23.407313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.407985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.408981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409124] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.041 [2024-11-06 10:01:23.409302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.409980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 
[2024-11-06 10:01:23.410196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.410986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411965] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.411994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.042 [2024-11-06 10:01:23.412460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 
[2024-11-06 10:01:23.412730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.412966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.413778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414543] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.043 [2024-11-06 10:01:23.414663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.414991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 
[2024-11-06 10:01:23.415302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.415931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.044 [2024-11-06 10:01:23.416783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416932] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.416987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 
[2024-11-06 10:01:23.417729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.417982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.418985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.045 [2024-11-06 10:01:23.419257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.046 [2024-11-06 10:01:23.419803] ctrlr_bdev.c: 
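For context, the message repeated above is the target's length check on read commands: a read of NLB blocks needs NLB * block size bytes of payload, and here the SGL attached to the command describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd rejects the read; it fires continuously while ns_hotplug_stress.sh resizes the NULL1 null bdev (the bdev_null_resize step visible just below). A minimal standalone sketch of that check, with hypothetical names and only the arithmetic taken from the log line (this is not the SPDK source), is:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: a read of `nlb` logical blocks of `block_size` bytes
 * only fits if the SGL describes at least that many bytes. */
static bool read_length_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
        return nlb * block_size <= sgl_length;
}

int main(void)
{
        /* Values from the log line above: NLB 1, block size 512, SGL length 1. */
        uint64_t nlb = 1;
        uint32_t block_size = 512;
        uint64_t sgl_length = 1;

        if (!read_length_ok(nlb, block_size, sgl_length)) {
                /* 1 * 512 > 1, so a target would complete the command with an error. */
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
                return 1;
        }
        return 0;
}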
00:09:20.046 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:20.046 [2024-11-06 10:01:23.420701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.046 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:09:20.047 [2024-11-06 10:01:23.422060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047
[2024-11-06 10:01:23.422090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.047 [2024-11-06 10:01:23.422889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.422913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.422936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.422959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.422982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 [2024-11-06 10:01:23.423548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.048 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.048 [2024-11-06 10:01:23.423577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
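The resize step at ns_hotplug_stress.sh@50 above drives SPDK's bdev_null_resize RPC against the NULL1 bdev while reads are still in flight, which is what produces the repeated ctrlr_bdev.c errors: each read asks for NLB 1 * block size 512 = 512 bytes, but the request's SGL describes only 1 byte. A minimal standalone sketch of the same create/resize/cleanup sequence, assuming a running SPDK target with its default RPC socket and the stock scripts/rpc.py helper (the NULL1 name and the 1025 MB size mirror the log; the create/delete calls and their sizes are illustrative assumptions, not taken from the test script):

  # assumed standalone sketch -- not the ns_hotplug_stress.sh harness itself
  ./scripts/rpc.py bdev_null_create NULL1 1024 512   # 1024 MB null bdev with 512-byte blocks (assumed starting size)
  ./scripts/rpc.py bdev_null_resize NULL1 1025       # grow it to 1025 MB, as the step at sh@50 does
  ./scripts/rpc.py bdev_null_delete NULL1            # clean up the bdev afterwards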
00:09:20.048 [2024-11-06 10:01:23.423607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072
[2024-11-06 10:01:23.437050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.072 [2024-11-06 10:01:23.437708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.437992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.438983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439073] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 
[2024-11-06 10:01:23.439813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.439987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.440996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.073 [2024-11-06 10:01:23.441611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441639] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.441992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 
[2024-11-06 10:01:23.442367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.442692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.443993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444316] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.444989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.445335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.445391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.445421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 
[2024-11-06 10:01:23.445450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.445477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.074 [2024-11-06 10:01:23.445510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.445970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446935] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.446992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.447988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 
[2024-11-06 10:01:23.448015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.075 [2024-11-06 10:01:23.448711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
[... ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 repeated continuously from 00:09:20.075 [2024-11-06 10:01:23.448739] through 00:09:20.081 [2024-11-06 10:01:23.467668]; identical entries elided ...]
00:09:20.078 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:20.081 [2024-11-06 10:01:23.467668] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.467978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.081 [2024-11-06 10:01:23.468214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 
[2024-11-06 10:01:23.468331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.468763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.469984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470507] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.470956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.082 [2024-11-06 10:01:23.471323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 
[2024-11-06 10:01:23.471356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.471778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.472999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473059] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 
[2024-11-06 10:01:23.473798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.473965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.083 [2024-11-06 10:01:23.474353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.474629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.475990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476105] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 
[2024-11-06 10:01:23.476900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.476992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.477978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478530] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.084 [2024-11-06 10:01:23.478645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.478995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 [2024-11-06 10:01:23.479592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.085 
[2024-11-06 10:01:23.479621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical ctrlr_bdev.c:361 *ERROR* line repeats continuously from 10:01:23.479621 onward, differing only in timestamp; duplicate entries omitted ...]
00:09:20.088 Message suppressed 999 times: [2024-11-06 10:01:23.493419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.088 Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.498969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499121] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.499854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 
[2024-11-06 10:01:23.500271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.090 [2024-11-06 10:01:23.500419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.500993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.501990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502045] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.502989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 
[2024-11-06 10:01:23.503134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.503983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504784] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.504987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.505020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.505049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.505101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.091 [2024-11-06 10:01:23.505130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 
[2024-11-06 10:01:23.505633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.505975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.506743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507549] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.507987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 
[2024-11-06 10:01:23.508380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.508963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.509983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510164] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.092 [2024-11-06 10:01:23.510283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 [2024-11-06 10:01:23.510804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 
[2024-11-06 10:01:23.510832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.093 
[... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error, "Read NLB 1 * block size 512 > SGL length 1", repeats continuously from 10:01:23.510860 through 10:01:23.529131; the elapsed-time stamps advance from 00:09:20.093 to 00:09:20.387 ...]
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.387 [2024-11-06 10:01:23.529174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.529990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530353] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.530980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 
[2024-11-06 10:01:23.531160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.531992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.387 [2024-11-06 10:01:23.532164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532890] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.532982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 
[2024-11-06 10:01:23.533810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.533871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.534988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535698] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.535973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 
[2024-11-06 10:01:23.536595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.536972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.388 [2024-11-06 10:01:23.537623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.537841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538458] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.538989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 
[2024-11-06 10:01:23.539275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.539988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.540982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541069] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 [2024-11-06 10:01:23.541886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.389 
00:09:20.389 [2024-11-06 10:01:23.541918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.393 [... the identical *ERROR* line from ctrlr_bdev.c:361 (nvmf_bdev_ctrlr_read_cmd) repeats several hundred more times between 10:01:23.541918 and 10:01:23.560929 while the unit test deliberately exercises this rejection path; the duplicate entries are omitted here ...]
[2024-11-06 10:01:23.560956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.560986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.561997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.562999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.393 [2024-11-06 10:01:23.563096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563410] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.563986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 
[2024-11-06 10:01:23.564176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.564980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.393 [2024-11-06 10:01:23.565568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.565973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566140] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.566710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 
[2024-11-06 10:01:23.567620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.567997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.568976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569209] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.569977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 
[2024-11-06 10:01:23.570157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.570984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.394 [2024-11-06 10:01:23.571854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.571890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.571922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.571953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.571981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572045] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 [2024-11-06 10:01:23.572799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.395 
00:09:20.395 [2024-11-06 10:01:23.572826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.395 [the identical *ERROR* line from ctrlr_bdev.c:361 (nvmf_bdev_ctrlr_read_cmd) repeats for every unit-test iteration between 10:01:23.572826 and 10:01:23.591684; duplicate lines omitted]
00:09:20.396 true
[2024-11-06 10:01:23.591710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.591981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.592998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.593033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.398 [2024-11-06 10:01:23.593060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593406] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.593997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 
[2024-11-06 10:01:23.594172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.594993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.399 [2024-11-06 10:01:23.595235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077
10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
Message suppressed 999 times: [2024-11-06 10:01:23.599743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
Read completed with error (sct=0, sc=15)
[2024-11-06 10:01:23.607340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.607986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608121] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.608993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 
[2024-11-06 10:01:23.609127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.609977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.401 [2024-11-06 10:01:23.610969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.610998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611253] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.611978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 
[2024-11-06 10:01:23.612102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.612898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613824] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.613978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 
[2024-11-06 10:01:23.614640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.614986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.615978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616528] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.402 [2024-11-06 10:01:23.616699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.616974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 
[2024-11-06 10:01:23.617679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.617984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.618989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619226] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 [2024-11-06 10:01:23.619254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.403 
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeats for timestamps 10:01:23.619254 through 10:01:23.639189; several hundred identical lines collapsed ...]
00:09:20.406 [2024-11-06 10:01:23.636150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:09:20.406 [2024-11-06 10:01:23.639189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 
[2024-11-06 10:01:23.639220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.406 [2024-11-06 10:01:23.639764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.639974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640823] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.640979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 
[2024-11-06 10:01:23.641662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.641980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.642539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643737] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.643975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 
[2024-11-06 10:01:23.644561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.644972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.407 [2024-11-06 10:01:23.645753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.645966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646186] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.646980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 
[2024-11-06 10:01:23.647009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.647995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.648976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649292] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.649968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 
[2024-11-06 10:01:23.650059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.408 [2024-11-06 10:01:23.650842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* entry ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously with timestamps 2024-11-06 10:01:23.650876 through 10:01:23.670580; duplicate entries omitted ...]
00:09:20.413 [2024-11-06 10:01:23.670610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.670977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.413 [2024-11-06 10:01:23.671055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:20.413 [2024-11-06 10:01:23.671394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.671996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.672030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.672061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.413 [2024-11-06 10:01:23.672090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.672987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673257] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 
[2024-11-06 10:01:23.673960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.673987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.674977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.414 [2024-11-06 10:01:23.675539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675801] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.675979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 
[2024-11-06 10:01:23.676546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.676841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.677985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678396] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.415 [2024-11-06 10:01:23.678781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.678978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 
[2024-11-06 10:01:23.679405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.679978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.680999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681087] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.681992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.682017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.682048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.416 [2024-11-06 10:01:23.682078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.417 [2024-11-06 10:01:23.682111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.417 [2024-11-06 10:01:23.682139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.417 [2024-11-06 10:01:23.682172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.417 
00:09:20.417 [2024-11-06 10:01:23.682203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.417 [note: the identical *ERROR* line from ctrlr_bdev.c:361 is emitted continuously for each rejected read command between 10:01:23.682203 and 10:01:23.701346]
00:09:20.423 [2024-11-06 10:01:23.701346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-06 10:01:23.701377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.701987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.702884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703289] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.703967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.704001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.704031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.704057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 
[2024-11-06 10:01:23.704087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.423 [2024-11-06 10:01:23.704118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.704973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.705988] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 
[2024-11-06 10:01:23.706810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.706969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.424 [2024-11-06 10:01:23.707416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.425 [2024-11-06 10:01:23.707822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 
10:01:23.707894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.707987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:20.425 [2024-11-06 10:01:23.708698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.708984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.709755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710620] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.710998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.711031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.711059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.711090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.711120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.425 [2024-11-06 10:01:23.711152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 
[2024-11-06 10:01:23.711468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.711978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.712990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426 [2024-11-06 10:01:23.713581] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.426
[2024-11-06 10:01:23.713609 through 10:01:23.732522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error line emitted several hundred times by the unit test; duplicate log lines omitted) 00:09:20.432
[2024-11-06 10:01:23.732554] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.732972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 
[2024-11-06 10:01:23.733557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.733983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.432 [2024-11-06 10:01:23.734624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.734819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735525] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.735974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 
[2024-11-06 10:01:23.736297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.736979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.737976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738159] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.433 [2024-11-06 10:01:23.738324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 
[2024-11-06 10:01:23.738968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.738998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.739989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740875] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.740986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 [2024-11-06 10:01:23.741620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.434 Message 
suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.435 [2024-11-06 10:01:23.742307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.742984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.743017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.743046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.743076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.435 [2024-11-06 10:01:23.743109] 
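For context on the error that floods this part of the log: nvmf_bdev_ctrlr_read_cmd is rejecting each read because the requested transfer (NLB 1 * block size 512 = 512 bytes) is larger than the 1-byte SGL supplied with the command, so each read completes with an error (sct=0, sc=15), as the suppressed message above indicates. A minimal sketch of that kind of length check, using assumed function and parameter names rather than the actual ctrlr_bdev.c source, looks like this:

/*
 * Illustrative sketch only: the sort of validation that emits
 * "Read NLB ... * block size ... > SGL length ...". Names and types here are
 * assumptions for illustration, not the real SPDK implementation.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int
read_cmd_length_check(uint64_t num_blocks, uint64_t block_size, uint32_t sgl_length)
{
	/* The buffer described by the command's SGL must be able to hold the
	 * whole transfer: NLB (number of logical blocks) * logical block size. */
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr,
		        "Read NLB %" PRIu64 " * block size %" PRIu64 " > SGL length %" PRIu32 "\n",
		        num_blocks, block_size, sgl_length);
		return -1;	/* the read is failed instead of being submitted */
	}
	return 0;
}

int
main(void)
{
	/* The case seen in the log: NLB 1, 512-byte blocks, but only a 1-byte SGL. */
	return read_cmd_length_check(1, 512, 1) == -1 ? 0 : 1;
}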
00:09:20.435 [... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:361 continues, repeating unchanged from 10:01:23.742307 through 10:01:23.748379; duplicate entries omitted ...]
00:09:20.436 [2024-11-06 10:01:23.748408] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.436 [2024-11-06 10:01:23.748431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.436 [2024-11-06 10:01:23.748465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.436 [2024-11-06 10:01:23.748496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 [2024-11-06 10:01:23.748743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.437 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.727 [2024-11-06 10:01:23.942478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 10:01:23.942675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.727 [2024-11-06 
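The repeated *ERROR* entries above and below record the target's read-length validation: the requested transfer, NLB times the block size (here 1 * 512 = 512 bytes), exceeds the 1-byte SGL length carried by each command, so those reads complete with an error, consistent with the suppressed "Read completed with error (sct=0, sc=11)" notices. A minimal standalone C sketch of that comparison follows; it only restates the check implied by the log message, and the names read_len_fits_sgl and the driver main are invented for this example, not taken from ctrlr_bdev.c.

/*
 * Illustrative sketch only (not SPDK source): a read whose data length
 * (NLB * block size) exceeds the payload described by the command's SGL
 * is rejected, which is what the log entries above report.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_len_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	/* Reject the read when the data it would produce exceeds the SGL length. */
	if (nlb * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
			nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* Values taken from the log: NLB=1, block size=512, SGL length=1. */
	read_len_fits_sgl(1, 512, 1);
	return 0;
}

Compiled and run, the sketch prints the same "Read NLB 1 * block size 512 > SGL length 1" line seen in this log for the 1/512/1 case.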
00:09:20.727 [2024-11-06 10:01:23.942478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the identical *ERROR* entry repeats continuously through [2024-11-06 10:01:23.957358] at 00:09:20.731)
00:09:20.731
[2024-11-06 10:01:23.957382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.731 [2024-11-06 10:01:23.957414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.731 [2024-11-06 10:01:23.957440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.731 [2024-11-06 10:01:23.957471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.731 [2024-11-06 10:01:23.957500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.731 [2024-11-06 10:01:23.957529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.957978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.958985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959257] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.732 [2024-11-06 10:01:23.959989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 
[2024-11-06 10:01:23.960048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.960994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.733 [2024-11-06 10:01:23.961960] ctrlr_bdev.c: 
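The repeated *ERROR* line comes from the read-path length validation in ctrlr_bdev.c: the target rejects a READ whose requested transfer, the number of logical blocks (NLB) times the block size, is larger than the SGL buffer the host supplied, which is exactly what the message spells out (1 block of 512 bytes against a 1-byte SGL). A minimal Python sketch of that comparison using the values from the log; the function name and return convention are illustrative only, not SPDK's actual C implementation:

    def read_fits_in_sgl(nlb: int, block_size: int, sgl_length: int) -> bool:
        """Mirror of the check behind 'Read NLB x * block size y > SGL length z'."""
        transfer_bytes = nlb * block_size   # bytes the READ command asks to move
        if transfer_bytes > sgl_length:     # host buffer described by the SGL is too small
            print(f"Read NLB {nlb} * block size {block_size} > SGL length {sgl_length}")
            return False
        return True

    # Values seen in the log: a single 512-byte block against a 1-byte SGL is rejected.
    assert read_fits_in_sgl(nlb=1, block_size=512, sgl_length=1) is False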
00:09:20.733 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
00:09:20.733 10:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 
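The two xtrace lines above are steps 49 and 50 of target/ns_hotplug_stress.sh: the script bumps its null_size counter to 1026 and then resizes the NULL1 null bdev through the JSON-RPC client. A hedged sketch of driving the same step from Python; the rpc.py path, bdev name, and size are taken from the log, while the one-unit-at-a-time loop and the treatment of the size argument are assumptions about how the stress script iterates:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def resize_null_bdev(name: str, new_size: int) -> None:
        # Ask the running SPDK target to change the null bdev's capacity.
        subprocess.run([RPC, "bdev_null_resize", name, str(new_size)], check=True)

    # The log shows the counter reaching 1026; stepping it one unit per iteration is assumed.
    for size in range(1025, 1027):
        resize_null_bdev("NULL1", size)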
00:09:20.733 [2024-11-06 10:01:23.963007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
(the identical *ERROR* line continues to repeat after the resize)
00:09:20.736 [2024-11-06 10:01:23.972974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.736 
[2024-11-06 10:01:23.973009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.736 [2024-11-06 10:01:23.973041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.736 [2024-11-06 10:01:23.973072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.736 [2024-11-06 10:01:23.973103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.973989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.974978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975198] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.975971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 
[2024-11-06 10:01:23.975998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.737 [2024-11-06 10:01:23.976756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.976971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.738 [2024-11-06 10:01:23.977630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:20.738 [2024-11-06 10:01:23.977662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.977972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.978728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979615] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.979971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.738 [2024-11-06 10:01:23.980210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 
[2024-11-06 10:01:23.980416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.980975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.981989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982590] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.982976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 
[2024-11-06 10:01:23.983385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.739 [2024-11-06 10:01:23.983974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.984987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.985026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.985056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 [2024-11-06 10:01:23.985088] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.740 
[duplicate log records collapsed: the identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 message repeats continuously from [2024-11-06 10:01:23.985119] through [2024-11-06 10:01:24.004592] (console timestamps 00:09:20.740 - 00:09:20.746), differing only in timestamp] 
00:09:20.746 [2024-11-06 10:01:24.004621] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.004649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.004684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.004714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.004744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.004775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 
[2024-11-06 10:01:24.005816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.005992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.006977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007577] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.746 [2024-11-06 10:01:24.007907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.007934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.007966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 
[2024-11-06 10:01:24.008356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.008981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.009978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010511] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.010973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 
[2024-11-06 10:01:24.011311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.747 [2024-11-06 10:01:24.011550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.011842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.748 [2024-11-06 10:01:24.012682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.012963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:20.748 [2024-11-06 10:01:24.013107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.013970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.014988] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.015020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.015056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.015083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.748 [2024-11-06 10:01:24.015111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 
[2024-11-06 10:01:24.015712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.015984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.016973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.749 [2024-11-06 10:01:24.017526] ctrlr_bdev.c: 
00:09:20.755 [2024-11-06 10:01:24.036104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.036978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.037007] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.755 [2024-11-06 10:01:24.037040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 
[2024-11-06 10:01:24.037813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.037977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.038935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039732] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.039979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.756 [2024-11-06 10:01:24.040276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 
[2024-11-06 10:01:24.040519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.040987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.041982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042317] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.042983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 
[2024-11-06 10:01:24.043155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.757 [2024-11-06 10:01:24.043917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.043955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.043984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.044993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045021] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.045984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 
[2024-11-06 10:01:24.046169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.046983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.758 [2024-11-06 10:01:24.047232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.759 [2024-11-06 10:01:24.047702] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:20.759 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:20.759 [identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors logged continuously from 10:01:24.047730 through 10:01:24.067141; duplicate lines omitted]
00:09:20.765 [2024-11-06 10:01:24.067170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.765 [2024-11-06 10:01:24.067679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.067971] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 
[2024-11-06 10:01:24.068956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.068984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.069994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.766 [2024-11-06 10:01:24.070272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070536] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.070986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 
[2024-11-06 10:01:24.071666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.071985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.072829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.767 [2024-11-06 10:01:24.073691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073910] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.073999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 
[2024-11-06 10:01:24.074699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.074977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.075983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.768 [2024-11-06 10:01:24.076424] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.076981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 
[2024-11-06 10:01:24.077190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.077524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.078996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769 [2024-11-06 10:01:24.079306] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.769
[identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated between 10:01:24.079 and 10:01:24.099 omitted]
Message suppressed 999 times: [2024-11-06 10:01:24.085074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.771
Read completed with error (sct=0, sc=15) 00:09:20.771
[2024-11-06 10:01:24.099140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099953] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.099984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.776 [2024-11-06 10:01:24.100670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 
[2024-11-06 10:01:24.100694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.100719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.100749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.100776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.100809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.101999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102648] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.102996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 
[2024-11-06 10:01:24.103824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.103972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.777 [2024-11-06 10:01:24.104195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.104995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105401] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.105991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 
[2024-11-06 10:01:24.106538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.106977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.778 [2024-11-06 10:01:24.107369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.107730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108381] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.108967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 
[2024-11-06 10:01:24.109176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.109979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.110978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.779 [2024-11-06 10:01:24.111012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780 [2024-11-06 10:01:24.111041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780 [2024-11-06 10:01:24.111068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780 [2024-11-06 10:01:24.111097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780 [2024-11-06 10:01:24.111128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780 [2024-11-06 10:01:24.111154] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780
[2024-11-06 10:01:24.111181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.780
[identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeated continuously through 2024-11-06 10:01:24.120850, 00:09:20.780-00:09:20.783]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.783
[identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeated continuously from 2024-11-06 10:01:24.120884 through 10:01:24.123756, 00:09:20.783-00:09:20.784]
true 00:09:20.784
[identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error repeated continuously from 2024-11-06 10:01:24.123785 through 10:01:24.130517, 00:09:20.784-00:09:20.786]
[2024-11-06 10:01:24.130548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.130829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.131976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132442] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.786 [2024-11-06 10:01:24.132783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.132999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 
[2024-11-06 10:01:24.133573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.133976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.134991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135168] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.135965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 
[2024-11-06 10:01:24.136324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.787 [2024-11-06 10:01:24.136377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.136998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.137680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138195] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.138972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 
[2024-11-06 10:01:24.138999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.788 [2024-11-06 10:01:24.139549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.139971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140909] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.140997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 
[2024-11-06 10:01:24.141694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.141980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.789 [2024-11-06 10:01:24.142996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:20.789 [2024-11-06 10:01:24.143024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeat continuously, 10:01:24.143-10:01:24.150; duplicate lines elided ...]
00:09:20.792 10:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077
00:09:20.792 10:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical read errors continue, 10:01:24.150-10:01:24.156; duplicate lines elided ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical read errors continue, 10:01:24.157-10:01:24.162; duplicate lines elided ...]
[2024-11-06 10:01:24.162020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162820] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.162912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 
[2024-11-06 10:01:24.163962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.163993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.164970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.796 [2024-11-06 10:01:24.165369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165906] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.165992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 
[2024-11-06 10:01:24.166708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.166986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.167981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168511] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.797 [2024-11-06 10:01:24.168694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.168973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 
[2024-11-06 10:01:24.169327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.169761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.170990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171468] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.171992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.172021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.172052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.172112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.172143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.798 [2024-11-06 10:01:24.172175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 
[2024-11-06 10:01:24.172231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.172982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 [2024-11-06 10:01:24.173960] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 
[2024-11-06 10:01:24.173992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.799 
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 message repeats for every timestamp from 10:01:24.173992 through 10:01:24.193476 (log time 00:09:20.799 - 00:09:20.805) ...]
[2024-11-06 10:01:24.193505] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:20.805 [2024-11-06 10:01:24.193658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.193993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.805 [2024-11-06 10:01:24.194188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194250] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.194849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 
[2024-11-06 10:01:24.195387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.195980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196936] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.196997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.806 [2024-11-06 10:01:24.197859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.197898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.197930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.197967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.197997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 
[2024-11-06 10:01:24.198276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:20.807 [2024-11-06 10:01:24.198583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.073 [2024-11-06 10:01:24.198609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.198984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.199979] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 
[2024-11-06 10:01:24.200793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.200982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.074 [2024-11-06 10:01:24.201176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.201741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.202993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203239] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.203986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 
[2024-11-06 10:01:24.204107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.075 [2024-11-06 10:01:24.204521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.204981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:21.076 [2024-11-06 10:01:24.205802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- this message is repeated several hundred more times between 10:01:24.205 and 10:01:24.223 (elapsed 00:09:21.076 through 00:09:21.082); duplicate entries omitted ...]
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 10:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:22.026 10:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:22.026 10:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:22.287 true
00:09:22.287 10:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077
00:09:22.287 10:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:23.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:23.258 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:23.258 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:23.258 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:23.519 true 00:09:23.519 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:23.519 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.519 10:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.779 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:23.779 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:24.039 true 00:09:24.039 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:24.039 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.039 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.300 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:24.300 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:24.561 true 00:09:24.561 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:24.561 10:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.561 10:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.822 10:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:24.822 10:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:25.083 true 00:09:25.083 10:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:25.083 10:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 10:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.303 10:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:26.303 10:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:26.567 true 00:09:26.567 10:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:26.567 10:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.511 10:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.511 10:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:27.511 10:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:27.772 true 00:09:27.772 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:27.772 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.772 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.033 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:28.033 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:28.294 true 00:09:28.294 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:28.295 10:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.499 10:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.499 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:09:29.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.499 10:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:29.499 10:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:29.760 true 00:09:29.760 10:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:29.760 10:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.701 10:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.701 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:30.701 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:30.962 true 00:09:30.962 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:30.962 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.962 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.222 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:31.222 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:31.483 true 00:09:31.483 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:31.483 10:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.426 Initializing NVMe Controllers 00:09:32.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.426 Controller IO queue size 128, less than required. 00:09:32.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:32.426 Controller IO queue size 128, less than required. 00:09:32.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:32.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:32.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:32.426 Initialization complete. Launching workers. 00:09:32.426 ======================================================== 00:09:32.426 Latency(us) 00:09:32.426 Device Information : IOPS MiB/s Average min max 00:09:32.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2423.03 1.18 27517.18 2229.06 1187004.36 00:09:32.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14495.22 7.08 8830.22 1441.35 518342.55 00:09:32.426 ======================================================== 00:09:32.426 Total : 16918.25 8.26 11506.56 1441.35 1187004.36 00:09:32.426 00:09:32.686 10:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.686 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:32.686 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:32.947 true 00:09:32.947 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3665077 00:09:32.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3665077) - No such process 00:09:32.947 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3665077 00:09:32.947 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:33.207 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:33.468 null0 00:09:33.468 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:33.468 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:33.468 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:33.729 null1 00:09:33.729 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
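[Note] The trace above walks the namespace-cycling loop at ns_hotplug_stress.sh lines 44-50: while the background I/O process (PID 3665077) is still alive, it removes namespace 1, re-adds the Delay0 bdev, and resizes the NULL1 null bdev one step at a time (1027, 1028, ... 1038) until kill -0 reports "No such process". A minimal sketch of that loop, reconstructed from the trace rather than quoted from the script; rpc_py is the full scripts/rpc.py path from the log, and perf_pid is an assumed variable holding 3665077:

    # Hedged reconstruction of the traced resize loop; variable names are assumptions.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1026
    while kill -0 "$perf_pid"; do                      # loop ends once the perf process exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                                # 1027, 1028, ... as seen in the trace
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # grow the null bdev each pass
    done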
00:09:33.729 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:33.729 10:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:33.729 null2 00:09:33.729 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:33.729 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:33.729 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:34.065 null3 00:09:34.065 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.065 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.066 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:34.066 null4 00:09:34.066 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.066 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.066 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:34.340 null5 00:09:34.340 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.340 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.340 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:34.602 null6 00:09:34.602 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.602 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.602 10:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:34.602 null7 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
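[Note] By this point the script has torn down the Delay0 namespaces (lines 53-55) and moved to the hot-plug phase: it sets nthreads=8, clears pids=(), and creates eight 100 MiB null bdevs, null0 through null7, each with a 4096-byte block size (lines 58-60). The creation loop, written out from the trace (a sketch, not a verbatim copy of the script):

    # Creation loop as traced at ns_hotplug_stress.sh@58-60; rpc_py is the
    # full scripts/rpc.py path that appears in the log.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # name, total size, block size
    done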
00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
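[Note] Each null bdev is then handed to a background add_remove worker (the "local nsid=N bdev=nullM" lines above), which attaches the bdev as namespace N and detaches it again, ten times over; the parent collects the worker PIDs and waits for all eight, which is the "wait 3671770 ... 3671783" visible just below. A hedged reconstruction of the worker and its launch, pieced together from the @14-@18 and @62-@64/@66 trace lines (the real script may word it differently):

    # Assumed shape of the add_remove worker and its launch loop.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # e.g. namespace 1 <- null0, ... namespace 8 <- null7
        pids+=($!)
    done
    wait "${pids[@]}"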
00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3671770 3671772 3671774 3671775 3671777 3671779 3671781 3671783 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.602 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.863 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.123 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.124 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.124 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:35.124 10:01:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.124 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 10:01:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.646 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.646 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.646 10:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:35.646 10:01:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.646 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.906 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.906 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
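[Note] The interleaved @17/@18 lines that follow are just those eight workers racing each other: every cycle is one nvmf_subsystem_add_ns plus one nvmf_subsystem_remove_ns against the same subsystem. Run by hand against a live target, a single cycle would look like this (NSID 3 and null2 chosen arbitrarily for the example):

    # One manual hot-plug cycle, same RPC forms as in the trace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2   # attach null2 as NSID 3
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3        # detach NSID 3 again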
00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.907 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.168 10:01:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.168 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.429 10:01:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.429 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.690 10:01:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.690 10:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.690 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.950 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.950 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.950 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.950 10:01:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.950 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.951 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.211 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.212 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.212 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.471 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.472 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.472 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.472 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.472 10:01:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.472 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.732 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.993 10:01:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.993 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.254 10:01:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.254 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.514 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.514 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:38.515 
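The @16-@18 traces above all come from the hot-plug loop in target/ns_hotplug_stress.sh: repeated rounds of attaching eight null bdevs as namespaces 1-8 of cnode1 and detaching them again while host I/O is running. A minimal sketch of that kind of loop, assuming the real script shuffles the add/remove order rather than walking it sequentially, looks like:

    #!/usr/bin/env bash
    # Sketch of the namespace hot-plug loop traced above (not the exact script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                       # ns_hotplug_stress.sh@16
        for n in {1..8}; do                      # nsid n backed by bdev null$((n-1))
            $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in {1..8}; do                      # then detach them again
            $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done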
10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.515 rmmod nvme_tcp 00:09:38.515 rmmod nvme_fabrics 00:09:38.515 rmmod nvme_keyring 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3664565 ']' 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3664565 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3664565 ']' 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3664565 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3664565 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3664565' 00:09:38.515 killing process with pid 3664565 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3664565 00:09:38.515 10:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3664565 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.775 10:01:42 
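nvmftestfini then tears the target back down. Condensed, and with the retry and error handling from nvmf/common.sh omitted, the teardown traced above amounts to:

    # Condensed teardown following the nvmftestfini trace (details simplified).
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break     # drops nvme_tcp, nvme_fabrics, nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"       # pid 3664565 in this run
    # drop the SPDK_NVMF accept rules that were inserted for the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1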
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.775 10:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.689 00:09:40.689 real 0m50.378s 00:09:40.689 user 3m14.483s 00:09:40.689 sys 0m16.459s 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.689 ************************************ 00:09:40.689 END TEST nvmf_ns_hotplug_stress 00:09:40.689 ************************************ 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.689 ************************************ 00:09:40.689 START TEST nvmf_delete_subsystem 00:09:40.689 ************************************ 00:09:40.689 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.950 * Looking for test storage... 
00:09:40.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.950 --rc genhtml_branch_coverage=1 00:09:40.950 --rc genhtml_function_coverage=1 00:09:40.950 --rc genhtml_legend=1 00:09:40.950 --rc geninfo_all_blocks=1 00:09:40.950 --rc geninfo_unexecuted_blocks=1 00:09:40.950 00:09:40.950 ' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.950 --rc genhtml_branch_coverage=1 00:09:40.950 --rc genhtml_function_coverage=1 00:09:40.950 --rc genhtml_legend=1 00:09:40.950 --rc geninfo_all_blocks=1 00:09:40.950 --rc geninfo_unexecuted_blocks=1 00:09:40.950 00:09:40.950 ' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.950 --rc genhtml_branch_coverage=1 00:09:40.950 --rc genhtml_function_coverage=1 00:09:40.950 --rc genhtml_legend=1 00:09:40.950 --rc geninfo_all_blocks=1 00:09:40.950 --rc geninfo_unexecuted_blocks=1 00:09:40.950 00:09:40.950 ' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.950 --rc genhtml_branch_coverage=1 00:09:40.950 --rc genhtml_function_coverage=1 00:09:40.950 --rc genhtml_legend=1 00:09:40.950 --rc geninfo_all_blocks=1 00:09:40.950 --rc geninfo_unexecuted_blocks=1 00:09:40.950 00:09:40.950 ' 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
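The scripts/common.sh trace above is only a version gate: it checks whether the installed lcov is older than 2 so the matching coverage flags get exported. A standalone sketch of that comparison, simplified from the cmp_versions helper the trace shows, is:

    # Sketch of the "lt 1.15 2" style check traced above (illustrative only).
    version_lt() {
        local IFS=.-:                 # split on the same separators as the trace
        local -a ver1=($1) ver2=($2)
        local i a b
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                      # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"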
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.950 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.951 10:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:49.090 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.090 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.091 
10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:49.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:49.091 Found net devices under 0000:31:00.0: cvl_0_0 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:49.091 Found net devices under 0000:31:00.1: cvl_0_1 
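prepare_net_devs above scans the PCI bus for supported NICs (the two 0x8086:0x159b E810 functions in this rig) and maps each one to its kernel netdev through sysfs. A reduced sketch of that discovery step, covering only the device ID seen in this run, is:

    # Reduced sketch of the E810 discovery traced above; the real helper also
    # knows the other Intel and Mellanox IDs listed in nvmf/common.sh.
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
            net_devs+=("${net##*/}")
        done
    done
    # In this run the two ports resolve to cvl_0_0 and cvl_0_1.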
00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:09:49.091 00:09:49.091 --- 10.0.0.2 ping statistics --- 00:09:49.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.091 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:09:49.091 00:09:49.091 --- 10.0.0.1 ping statistics --- 00:09:49.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.091 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3677629 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3677629 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3677629 ']' 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.091 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.091 10:01:52 
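nvmf_tcp_init above splits the two E810 ports between target and initiator by moving one of them into a private network namespace, then proves connectivity in both directions before the target is started. Strung together, with the interface names and 10.0.0.0/24 addressing specific to this run, the sequence is:

    # The nvmf_tcp_init sequence traced above, strung together.
    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1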
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.092 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.092 10:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.092 [2024-11-06 10:01:52.544172] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:49.092 [2024-11-06 10:01:52.544231] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.351 [2024-11-06 10:01:52.633911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.351 [2024-11-06 10:01:52.674261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.351 [2024-11-06 10:01:52.674297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.351 [2024-11-06 10:01:52.674305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.351 [2024-11-06 10:01:52.674312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.351 [2024-11-06 10:01:52.674318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.351 [2024-11-06 10:01:52.675558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.351 [2024-11-06 10:01:52.675560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 [2024-11-06 10:01:53.401129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:49.921 10:01:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.921 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 [2024-11-06 10:01:53.425319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 NULL1 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 Delay0 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3677669 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:50.182 10:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:50.182 [2024-11-06 10:01:53.522168] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
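At this point the trace has built the target that the test is about to delete out from under active I/O: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (Delay0, ~1 s of injected latency per I/O) attached as namespace 1, and a spdk_nvme_perf workload started against it. A minimal sketch of the same setup driven through SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket (the test issues the equivalent calls through rpc_cmd inside the cvl_0_0_ns_spdk namespace):

# Sketch only: same RPC names and arguments as recorded in the trace above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512 B block size
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds (~1 s)
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With the delay bdev keeping the 128-deep queue full of in-flight commands, the nvmf_delete_subsystem call below aborts the outstanding I/O, which is what the long run of 'completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' records reflects.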
00:09:52.095 10:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.095 10:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.095 10:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.355 Write completed with error (sct=0, sc=8) 00:09:52.355 Read completed with error (sct=0, sc=8) 00:09:52.355 Read completed with error (sct=0, sc=8) 00:09:52.355 Read completed with error (sct=0, sc=8) 00:09:52.355 starting I/O failed: -6 00:09:52.355 Write completed with error (sct=0, sc=8) 00:09:52.355 Write completed with error (sct=0, sc=8) 00:09:52.355 Read completed with error (sct=0, sc=8) 00:09:52.355 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 [2024-11-06 10:01:55.646686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5512c0 is same with the state(6) to be set 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read 
completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 
00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 starting I/O failed: -6 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 [2024-11-06 10:01:55.650031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f118c000c40 is same with the state(6) to be set 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed 
with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Write completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.356 Read completed with error (sct=0, sc=8) 00:09:52.357 Read completed with error (sct=0, sc=8) 00:09:53.298 [2024-11-06 10:01:56.619879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5525e0 is same with the state(6) to be set 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 [2024-11-06 10:01:56.650027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5510e0 is same with the state(6) to be set 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 
00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 [2024-11-06 10:01:56.650679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5514a0 is same with the state(6) to be set 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 [2024-11-06 10:01:56.652532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f118c00d7e0 is same with the state(6) to be set 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Write completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 Read completed with error (sct=0, sc=8) 00:09:53.298 [2024-11-06 10:01:56.652604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f118c00d020 is same with the state(6) to be set 00:09:53.298 Initializing NVMe Controllers 00:09:53.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.298 Controller IO queue size 128, less than required. 
00:09:53.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:53.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:53.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:53.298 Initialization complete. Launching workers. 00:09:53.298 ======================================================== 00:09:53.298 Latency(us) 00:09:53.298 Device Information : IOPS MiB/s Average min max 00:09:53.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.28 0.08 887638.74 227.45 1006972.99 00:09:53.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.85 0.08 959671.08 280.77 2002277.34 00:09:53.298 ======================================================== 00:09:53.299 Total : 328.13 0.16 921851.37 227.45 2002277.34 00:09:53.299 00:09:53.299 [2024-11-06 10:01:56.653181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5525e0 (9): Bad file descriptor 00:09:53.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:53.299 10:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.299 10:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:53.299 10:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3677669 00:09:53.299 10:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3677669 00:09:53.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3677669) - No such process 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3677669 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3677669 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.869 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3677669 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.870 [2024-11-06 10:01:57.185841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3678516 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:53.870 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:53.870 [2024-11-06 10:01:57.263313] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
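The second pass repeats the scenario with a 3-second spdk_nvme_perf run and then polls the workload until it exits: the delete_subsystem.sh trace that follows is a half-second kill -0 loop bounded at roughly 20 iterations, followed by a wait on the perf pid once kill reports 'No such process'. A rough bash sketch of that polling pattern, with variable names mirroring the trace (the real script's error handling may differ):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do        # perf still running?
    if (( delay++ > 20 )); then                  # ~10 s upper bound at 0.5 s per pass
        echo "perf $perf_pid did not exit in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true             # reap it; the trace shows kill -0 failing once the process is gone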
00:09:54.440 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:54.440 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:54.440 10:01:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.010 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.010 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:55.010 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.270 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.270 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:55.270 10:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.840 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.840 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:55.840 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.409 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.409 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:56.409 10:01:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.979 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.979 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:56.979 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.240 Initializing NVMe Controllers 00:09:57.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.240 Controller IO queue size 128, less than required. 00:09:57.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:57.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:57.240 Initialization complete. Launching workers. 
00:09:57.240 ======================================================== 00:09:57.240 Latency(us) 00:09:57.240 Device Information : IOPS MiB/s Average min max 00:09:57.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002070.36 1000160.78 1008492.50 00:09:57.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002785.35 1000205.27 1008849.68 00:09:57.240 ======================================================== 00:09:57.240 Total : 256.00 0.12 1002427.86 1000160.78 1008849.68 00:09:57.240 00:09:57.240 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.240 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3678516 00:09:57.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3678516) - No such process 00:09:57.240 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3678516 00:09:57.240 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:57.240 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.500 rmmod nvme_tcp 00:09:57.500 rmmod nvme_fabrics 00:09:57.500 rmmod nvme_keyring 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3677629 ']' 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3677629 00:09:57.500 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3677629 ']' 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3677629 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3677629 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3677629' 00:09:57.501 killing process with pid 3677629 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3677629 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3677629 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.501 10:02:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.501 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.761 10:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.671 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.671 00:09:59.671 real 0m18.910s 00:09:59.671 user 0m30.881s 00:09:59.671 sys 0m7.243s 00:09:59.671 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.671 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.671 ************************************ 00:09:59.671 END TEST nvmf_delete_subsystem 00:09:59.672 ************************************ 00:09:59.672 10:02:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:59.672 10:02:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:59.672 10:02:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.672 10:02:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.672 ************************************ 00:09:59.672 START TEST nvmf_host_management 00:09:59.672 ************************************ 00:09:59.672 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:59.933 * Looking for test storage... 
00:09:59.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.933 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.933 --rc genhtml_branch_coverage=1 00:09:59.933 --rc genhtml_function_coverage=1 00:09:59.934 --rc genhtml_legend=1 00:09:59.934 --rc geninfo_all_blocks=1 00:09:59.934 --rc geninfo_unexecuted_blocks=1 00:09:59.934 00:09:59.934 ' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.934 --rc genhtml_branch_coverage=1 00:09:59.934 --rc genhtml_function_coverage=1 00:09:59.934 --rc genhtml_legend=1 00:09:59.934 --rc geninfo_all_blocks=1 00:09:59.934 --rc geninfo_unexecuted_blocks=1 00:09:59.934 00:09:59.934 ' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.934 --rc genhtml_branch_coverage=1 00:09:59.934 --rc genhtml_function_coverage=1 00:09:59.934 --rc genhtml_legend=1 00:09:59.934 --rc geninfo_all_blocks=1 00:09:59.934 --rc geninfo_unexecuted_blocks=1 00:09:59.934 00:09:59.934 ' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.934 --rc genhtml_branch_coverage=1 00:09:59.934 --rc genhtml_function_coverage=1 00:09:59.934 --rc genhtml_legend=1 00:09:59.934 --rc geninfo_all_blocks=1 00:09:59.934 --rc geninfo_unexecuted_blocks=1 00:09:59.934 00:09:59.934 ' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:09:59.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.934 10:02:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.941 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:09.942 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:09.942 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:09.942 Found net devices under 0000:31:00.0: cvl_0_0 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.942 10:02:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:09.942 Found net devices under 0000:31:00.1: cvl_0_1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:10:09.942 00:10:09.942 --- 10.0.0.2 ping statistics --- 00:10:09.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.942 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:09.942 00:10:09.942 --- 10.0.0.1 ping statistics --- 00:10:09.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.942 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3684051 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3684051 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:09.942 10:02:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3684051 ']' 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.942 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.943 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.943 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.943 10:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 [2024-11-06 10:02:12.036515] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:09.943 [2024-11-06 10:02:12.036562] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.943 [2024-11-06 10:02:12.145511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.943 [2024-11-06 10:02:12.182102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.943 [2024-11-06 10:02:12.182135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.943 [2024-11-06 10:02:12.182144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.943 [2024-11-06 10:02:12.182151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.943 [2024-11-06 10:02:12.182157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
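The nvmftestinit trace above boils down to a small loopback topology: of the two E810 ports found (0000:31:00.0 and 0000:31:00.1, exposed as cvl_0_0 and cvl_0_1), the first is moved into a private network namespace and becomes the target side, the second stays in the default namespace as the initiator side, and nvmf_tgt is started inside the namespace so it listens on 10.0.0.2:4420. A condensed sketch of that setup, using the interface names and addresses this particular run happened to pick:

    # Condensed sketch of what the nvmftestinit/nvmf_tcp_init trace above does.
    TARGET_IF=cvl_0_0          # moved into a netns; carries the target address
    INITIATOR_IF=cvl_0_1       # stays in the default namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open TCP/4420 towards the initiator port and check reachability both ways.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The target then runs inside the namespace (this is what nvmfappstart did).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &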
00:10:09.943 [2024-11-06 10:02:12.183621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.943 [2024-11-06 10:02:12.183809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.943 [2024-11-06 10:02:12.183929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.943 [2024-11-06 10:02:12.183930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 [2024-11-06 10:02:12.925910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.943 10:02:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 Malloc0 00:10:09.943 [2024-11-06 10:02:13.006700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3684421 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3684421 /var/tmp/bdevperf.sock 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3684421 ']' 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.943 { 00:10:09.943 "params": { 00:10:09.943 "name": "Nvme$subsystem", 00:10:09.943 "trtype": "$TEST_TRANSPORT", 00:10:09.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.943 "adrfam": "ipv4", 00:10:09.943 "trsvcid": "$NVMF_PORT", 00:10:09.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.943 "hdgst": ${hdgst:-false}, 00:10:09.943 "ddgst": ${ddgst:-false} 00:10:09.943 }, 00:10:09.943 "method": "bdev_nvme_attach_controller" 00:10:09.943 } 00:10:09.943 EOF 00:10:09.943 )") 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:09.943 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.943 "params": { 00:10:09.943 "name": "Nvme0", 00:10:09.943 "trtype": "tcp", 00:10:09.943 "traddr": "10.0.0.2", 00:10:09.943 "adrfam": "ipv4", 00:10:09.943 "trsvcid": "4420", 00:10:09.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:09.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:09.943 "hdgst": false, 00:10:09.943 "ddgst": false 00:10:09.943 }, 00:10:09.943 "method": "bdev_nvme_attach_controller" 00:10:09.943 }' 00:10:09.943 [2024-11-06 10:02:13.110188] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
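The interleaved gen_nvmf_target_json trace above is worth unpacking: the test expands a per-subsystem heredoc template into one bdev_nvme_attach_controller entry, runs it through jq, and hands the result to bdevperf through bash process substitution, which is why the command line shows --json /dev/fd/63 rather than a file name. A small stand-alone sketch of the same pattern follows; gen_controller_entry is a hypothetical stand-in, and the real helper in test/nvmf/common.sh additionally wraps the entry in a complete bdev-subsystem config that the trace does not show.

    #!/usr/bin/env bash
    # Hypothetical stand-in for gen_nvmf_target_json: emit the attach-controller
    # entry printed in the trace above. The real helper embeds this entry in a
    # full SPDK subsystems config before bdevperf reads it.
    gen_controller_entry() {
        local n=$1
        jq -n --arg n "$n" '{
            method: "bdev_nvme_attach_controller",
            params: {
                name: ("Nvme" + $n),
                trtype: "tcp",
                traddr: "10.0.0.2",
                adrfam: "ipv4",
                trsvcid: "4420",
                subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
                hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
                hdgst: false,
                ddgst: false
            }
        }'
    }

    # Process substitution exposes generated JSON as an anonymous /dev/fd/NN path;
    # "--json /dev/fd/63" in the trace corresponds to a call shaped like:
    #   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
    #            -q 64 -o 65536 -w verify -t 10
    gen_controller_entry 0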
00:10:09.943 [2024-11-06 10:02:13.110240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684421 ] 00:10:09.943 [2024-11-06 10:02:13.188232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.943 [2024-11-06 10:02:13.224812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.943 Running I/O for 10 seconds... 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:10.517 10:02:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.517 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.517 [2024-11-06 10:02:13.989876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585530 is same with the state(6) to be set 00:10:10.517 [2024-11-06 10:02:13.990471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 
10:02:13.990652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.517 [2024-11-06 10:02:13.990812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.517 [2024-11-06 10:02:13.990822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.990987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.990996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.518 [2024-11-06 10:02:13.991486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.518 [2024-11-06 10:02:13.991493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.519 [2024-11-06 10:02:13.991594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ee370 is same with the state(6) to be set 00:10:10.519 [2024-11-06 10:02:13.991680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.519 [2024-11-06 10:02:13.991692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.519 [2024-11-06 10:02:13.991707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.519 [2024-11-06 10:02:13.991725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.519 [2024-11-06 10:02:13.991741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.991748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ddb00 is same with the state(6) to be set 00:10:10.519 [2024-11-06 10:02:13.992958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:10.519 task offset: 113408 on job bdev=Nvme0n1 fails 00:10:10.519 00:10:10.519 Latency(us) 00:10:10.519 [2024-11-06T09:02:14.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.519 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:10.519 Job: Nvme0n1 ended in about 0.58 seconds with error 00:10:10.519 Verification LBA range: start 0x0 length 0x400 00:10:10.519 Nvme0n1 : 0.58 1438.06 89.88 110.62 0.00 40361.97 2266.45 33860.27 00:10:10.519 [2024-11-06T09:02:14.020Z] =================================================================================================================== 00:10:10.519 [2024-11-06T09:02:14.020Z] Total : 1438.06 89.88 110.62 0.00 40361.97 2266.45 33860.27 00:10:10.519 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.519 [2024-11-06 10:02:13.995015] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.519 [2024-11-06 10:02:13.995038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ddb00 (9): Bad file descriptor 00:10:10.519 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:10.519 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.519 10:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.519 [2024-11-06 10:02:13.999664] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:10:10.519 [2024-11-06 10:02:13.999741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:10:10.519 [2024-11-06 10:02:13.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.519 [2024-11-06 10:02:13.999773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:10:10.519 [2024-11-06 10:02:13.999781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:10:10.519 [2024-11-06 10:02:13.999788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:10:10.519 [2024-11-06 10:02:13.999795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24ddb00 00:10:10.519 [2024-11-06 10:02:13.999814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ddb00 (9): Bad file descriptor 00:10:10.519 [2024-11-06 10:02:13.999826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:10:10.519 [2024-11-06 10:02:13.999833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:10:10.519 [2024-11-06 10:02:13.999842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:10:10.519 [2024-11-06 10:02:13.999851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:10:10.519 10:02:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.519 10:02:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3684421 00:10:11.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3684421) - No such process 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.904 { 00:10:11.904 "params": { 00:10:11.904 "name": "Nvme$subsystem", 00:10:11.904 "trtype": "$TEST_TRANSPORT", 00:10:11.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.904 "adrfam": "ipv4", 00:10:11.904 "trsvcid": "$NVMF_PORT", 00:10:11.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.904 "hdgst": ${hdgst:-false}, 00:10:11.904 "ddgst": ${ddgst:-false} 00:10:11.904 }, 00:10:11.904 "method": "bdev_nvme_attach_controller" 00:10:11.904 } 00:10:11.904 EOF 00:10:11.904 )") 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:11.904 10:02:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.904 "params": { 00:10:11.904 "name": "Nvme0", 00:10:11.904 "trtype": "tcp", 00:10:11.904 "traddr": "10.0.0.2", 00:10:11.904 "adrfam": "ipv4", 00:10:11.904 "trsvcid": "4420", 00:10:11.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:11.904 "hdgst": false, 00:10:11.904 "ddgst": false 00:10:11.904 }, 00:10:11.904 "method": "bdev_nvme_attach_controller" 00:10:11.904 }' 00:10:11.904 [2024-11-06 10:02:15.066400] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
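The failure injected above is the point of the host-management test: while bdevperf is driving I/O, the host NQN is removed from the subsystem's allowed-hosts list, which tears down the active queue pairs (the ABORTED - SQ DELETION completions for every outstanding command) and makes the subsequent reconnect fail with 'does not allow host'; the host is then re-added and a second, shorter bdevperf run confirms I/O completes cleanly again. Driven by hand against a running target, the same toggle is just two RPCs (scripts/rpc.py is the standard SPDK RPC client, talking to /var/tmp/spdk.sock by default):

    # Revoke the initiator's access while it holds an active connection; its
    # queued I/O completes with ABORTED - SQ DELETION and reconnects are refused.
    scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # ...observe the initiator-side failures, then restore access:
    scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0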
00:10:11.904 [2024-11-06 10:02:15.066455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684779 ] 00:10:11.904 [2024-11-06 10:02:15.145191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.904 [2024-11-06 10:02:15.180525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.166 Running I/O for 1 seconds... 00:10:13.107 1598.00 IOPS, 99.88 MiB/s 00:10:13.107 Latency(us) 00:10:13.107 [2024-11-06T09:02:16.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:13.107 Verification LBA range: start 0x0 length 0x400 00:10:13.107 Nvme0n1 : 1.03 1618.09 101.13 0.00 0.00 38870.38 7154.35 33641.81 00:10:13.107 [2024-11-06T09:02:16.608Z] =================================================================================================================== 00:10:13.107 [2024-11-06T09:02:16.608Z] Total : 1618.09 101.13 0.00 0.00 38870.38 7154.35 33641.81 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.367 rmmod nvme_tcp 00:10:13.367 rmmod nvme_fabrics 00:10:13.367 rmmod nvme_keyring 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3684051 ']' 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3684051 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3684051 ']' 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3684051 00:10:13.367 10:02:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684051 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684051' 00:10:13.367 killing process with pid 3684051 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3684051 00:10:13.367 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3684051 00:10:13.628 [2024-11-06 10:02:16.896427] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.628 10:02:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.542 10:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.542 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:15.542 00:10:15.542 real 0m15.834s 00:10:15.542 user 0m24.089s 00:10:15.542 sys 0m7.415s 00:10:15.542 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.542 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:15.542 ************************************ 00:10:15.542 END TEST nvmf_host_management 00:10:15.542 ************************************ 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
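The nvmf_lvol test launched here provisions its target before running spdk_nvme_perf against it; a condensed sketch of that rpc.py sequence, pieced together from the calls that appear further down in this same log (the UUID placeholders stand in for the values rpc.py actually returns in this run), looks roughly like:

  # build a raid0 backing device from two malloc bdevs (sizes as used in this run: 64 and 512)
  scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  # carve an lvol store and an initial 20 MiB lvol out of it
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs         # -> <lvs_uuid>
  scripts/rpc.py bdev_lvol_create -u <lvs_uuid> lvol 20     # -> <lvol_uuid>
  # expose the lvol over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # exercise snapshot/resize/clone/inflate while spdk_nvme_perf runs against the namespace
  scripts/rpc.py bdev_lvol_snapshot <lvol_uuid> MY_SNAPSHOT # -> <snapshot_uuid>
  scripts/rpc.py bdev_lvol_resize <lvol_uuid> 30
  scripts/rpc.py bdev_lvol_clone <snapshot_uuid> MY_CLONE   # -> <clone_uuid>
  scripts/rpc.py bdev_lvol_inflate <clone_uuid>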
00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.803 ************************************ 00:10:15.803 START TEST nvmf_lvol 00:10:15.803 ************************************ 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:15.803 * Looking for test storage... 00:10:15.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:15.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.803 --rc genhtml_branch_coverage=1 00:10:15.803 --rc genhtml_function_coverage=1 00:10:15.803 --rc genhtml_legend=1 00:10:15.803 --rc geninfo_all_blocks=1 00:10:15.803 --rc geninfo_unexecuted_blocks=1 00:10:15.803 00:10:15.803 ' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:15.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.803 --rc genhtml_branch_coverage=1 00:10:15.803 --rc genhtml_function_coverage=1 00:10:15.803 --rc genhtml_legend=1 00:10:15.803 --rc geninfo_all_blocks=1 00:10:15.803 --rc geninfo_unexecuted_blocks=1 00:10:15.803 00:10:15.803 ' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:15.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.803 --rc genhtml_branch_coverage=1 00:10:15.803 --rc genhtml_function_coverage=1 00:10:15.803 --rc genhtml_legend=1 00:10:15.803 --rc geninfo_all_blocks=1 00:10:15.803 --rc geninfo_unexecuted_blocks=1 00:10:15.803 00:10:15.803 ' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:15.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.803 --rc genhtml_branch_coverage=1 00:10:15.803 --rc genhtml_function_coverage=1 00:10:15.803 --rc genhtml_legend=1 00:10:15.803 --rc geninfo_all_blocks=1 00:10:15.803 --rc geninfo_unexecuted_blocks=1 00:10:15.803 00:10:15.803 ' 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.803 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.064 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.065 10:02:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:24.245 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:24.245 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.245 10:02:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:24.245 Found net devices under 0000:31:00.0: cvl_0_0 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:24.245 Found net devices under 0000:31:00.1: cvl_0_1 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.245 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:10:24.246 00:10:24.246 --- 10.0.0.2 ping statistics --- 00:10:24.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.246 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:24.246 00:10:24.246 --- 10.0.0.1 ping statistics --- 00:10:24.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.246 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3689816 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3689816 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3689816 ']' 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.246 10:02:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:24.246 [2024-11-06 10:02:27.462299] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:24.246 [2024-11-06 10:02:27.462362] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.246 [2024-11-06 10:02:27.552329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.246 [2024-11-06 10:02:27.593549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.246 [2024-11-06 10:02:27.593586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.246 [2024-11-06 10:02:27.593594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.246 [2024-11-06 10:02:27.593601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.246 [2024-11-06 10:02:27.593607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.246 [2024-11-06 10:02:27.595122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.246 [2024-11-06 10:02:27.595295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.246 [2024-11-06 10:02:27.595300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.818 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:25.079 [2024-11-06 10:02:28.466010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.079 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.340 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:25.340 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.602 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:25.602 10:02:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:25.602 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:25.863 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94e20d36-cf9a-4cab-9148-9ff55c6c3031 00:10:25.863 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94e20d36-cf9a-4cab-9148-9ff55c6c3031 lvol 20 00:10:26.123 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5f3f3fe0-5611-4c5d-84c5-7d665e0515c1 00:10:26.123 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:26.383 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f3f3fe0-5611-4c5d-84c5-7d665e0515c1 00:10:26.383 10:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:26.641 [2024-11-06 10:02:29.995912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.641 10:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.901 10:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3690519 00:10:26.901 10:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:26.901 10:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:27.843 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5f3f3fe0-5611-4c5d-84c5-7d665e0515c1 MY_SNAPSHOT 00:10:28.103 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c1eab149-3282-4115-8379-fc8617d2901e 00:10:28.103 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5f3f3fe0-5611-4c5d-84c5-7d665e0515c1 30 00:10:28.365 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c1eab149-3282-4115-8379-fc8617d2901e MY_CLONE 00:10:28.625 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f2a2b2d7-d317-47cd-aa7d-a67380400fbb 00:10:28.625 10:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f2a2b2d7-d317-47cd-aa7d-a67380400fbb 00:10:29.196 10:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3690519 00:10:37.330 Initializing NVMe Controllers 00:10:37.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:37.330 Controller IO queue size 128, less than required. 00:10:37.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:37.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:37.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:37.330 Initialization complete. Launching workers. 00:10:37.330 ======================================================== 00:10:37.331 Latency(us) 00:10:37.331 Device Information : IOPS MiB/s Average min max 00:10:37.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12144.60 47.44 10541.87 1491.41 62412.05 00:10:37.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17629.30 68.86 7262.78 361.26 56904.17 00:10:37.331 ======================================================== 00:10:37.331 Total : 29773.90 116.30 8600.30 361.26 62412.05 00:10:37.331 00:10:37.331 10:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.331 10:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f3f3fe0-5611-4c5d-84c5-7d665e0515c1 00:10:37.591 10:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94e20d36-cf9a-4cab-9148-9ff55c6c3031 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.852 rmmod nvme_tcp 00:10:37.852 rmmod nvme_fabrics 00:10:37.852 rmmod nvme_keyring 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3689816 ']' 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3689816 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3689816 ']' 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3689816 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689816 00:10:37.852 10:02:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689816' 00:10:37.852 killing process with pid 3689816 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3689816 00:10:37.852 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3689816 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.113 10:02:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.658 00:10:40.658 real 0m24.452s 00:10:40.658 user 1m4.690s 00:10:40.658 sys 0m9.064s 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:40.658 ************************************ 00:10:40.658 END TEST nvmf_lvol 00:10:40.658 ************************************ 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.658 ************************************ 00:10:40.658 START TEST nvmf_lvs_grow 00:10:40.658 ************************************ 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:40.658 * Looking for test storage... 
00:10:40.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.658 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.659 --rc genhtml_branch_coverage=1 00:10:40.659 --rc genhtml_function_coverage=1 00:10:40.659 --rc genhtml_legend=1 00:10:40.659 --rc geninfo_all_blocks=1 00:10:40.659 --rc geninfo_unexecuted_blocks=1 00:10:40.659 00:10:40.659 ' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.659 --rc genhtml_branch_coverage=1 00:10:40.659 --rc genhtml_function_coverage=1 00:10:40.659 --rc genhtml_legend=1 00:10:40.659 --rc geninfo_all_blocks=1 00:10:40.659 --rc geninfo_unexecuted_blocks=1 00:10:40.659 00:10:40.659 ' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.659 --rc genhtml_branch_coverage=1 00:10:40.659 --rc genhtml_function_coverage=1 00:10:40.659 --rc genhtml_legend=1 00:10:40.659 --rc geninfo_all_blocks=1 00:10:40.659 --rc geninfo_unexecuted_blocks=1 00:10:40.659 00:10:40.659 ' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.659 --rc genhtml_branch_coverage=1 00:10:40.659 --rc genhtml_function_coverage=1 00:10:40.659 --rc genhtml_legend=1 00:10:40.659 --rc geninfo_all_blocks=1 00:10:40.659 --rc geninfo_unexecuted_blocks=1 00:10:40.659 00:10:40.659 ' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:40.659 10:02:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.659 10:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.901 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:48.902 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:48.902 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.902 10:02:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:48.902 Found net devices under 0000:31:00.0: cvl_0_0 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:48.902 Found net devices under 0000:31:00.1: cvl_0_1 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.902 10:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:10:48.902 00:10:48.902 --- 10.0.0.2 ping statistics --- 00:10:48.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.902 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:48.902 00:10:48.902 --- 10.0.0.1 ping statistics --- 00:10:48.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.902 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3697532 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3697532 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3697532 ']' 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.902 10:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.902 [2024-11-06 10:02:52.355724] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
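Up to this point the trace (nvmf/common.sh, nvmf_tcp_init) has moved one port of the E810 NIC into a private network namespace, addressed both ends, opened the NVMe/TCP port and verified reachability in both directions before launching nvmf_tgt inside that namespace. A condensed sketch of that bring-up follows; the interface names and 10.0.0.x addresses are taken directly from the trace, while the full script paths and the iptables comment option are dropped for brevity.

# Condensed bring-up mirroring the nvmf_tcp_init steps traced above (run as root).
TARGET_IF=cvl_0_0          # port handed to the target, moved into its own namespace
INITIATOR_IF=cvl_0_1       # port left in the default namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Open the default NVMe/TCP port and confirm both directions are reachable.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1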
00:10:48.902 [2024-11-06 10:02:52.355788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.163 [2024-11-06 10:02:52.449630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.163 [2024-11-06 10:02:52.489604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.163 [2024-11-06 10:02:52.489641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.163 [2024-11-06 10:02:52.489649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.163 [2024-11-06 10:02:52.489656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.163 [2024-11-06 10:02:52.489662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.163 [2024-11-06 10:02:52.490275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.733 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:49.993 [2024-11-06 10:02:53.364028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:49.993 ************************************ 00:10:49.993 START TEST lvs_grow_clean 00:10:49.993 ************************************ 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:49.993 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:49.994 10:02:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:49.994 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:49.994 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:49.994 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:49.994 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:50.254 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:50.254 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:50.514 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:10:50.514 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:10:50.514 10:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:50.514 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:50.514 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:50.514 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 lvol 150 00:10:50.774 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f1cfed6b-79e1-4d27-b19c-cff90a56ef15 00:10:50.774 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:50.774 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:51.034 [2024-11-06 10:02:54.331072] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:51.034 [2024-11-06 10:02:54.331127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:51.034 true 00:10:51.034 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:10:51.034 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:51.034 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:51.034 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:51.294 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1cfed6b-79e1-4d27-b19c-cff90a56ef15 00:10:51.554 10:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:51.554 [2024-11-06 10:02:55.009132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.554 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3697991 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3697991 /var/tmp/bdevperf.sock 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3697991 ']' 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:51.814 10:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:51.814 [2024-11-06 10:02:55.243387] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
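The lvs_grow setup traced in lvs_grow_clean boils down to the RPC sequence below: build a 200 MiB AIO bdev, put a 4 MiB-cluster lvstore and a 150 MiB lvol on it, grow the backing file to 400 MiB, and export the lvol over NVMe/TCP. This is a sketch rather than the script itself: rpc.py and jq are assumed to be on PATH and the backing-file path is shortened, but the sizes, cluster size, NQN and serial number match the log.

# Sketch of the lvs_grow setup traced above (sizes, cluster size and NQN from the log;
# AIO_FILE stands in for the test/nvmf/target/aio_bdev path used by the script).
AIO_FILE=/tmp/aio_bdev
rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # reports 49 data clusters
LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 150)            # 150 MiB logical volume
# Grow the backing file; the lvstore keeps its old size until grow_lvstore is called.
truncate -s 400M "$AIO_FILE"
rpc.py bdev_aio_rescan aio_bdev
rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # still 49
# Export the lvol over NVMe/TCP so bdevperf can drive I/O from the initiator namespace.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420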
00:10:51.814 [2024-11-06 10:02:55.243441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697991 ] 00:10:52.074 [2024-11-06 10:02:55.337633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.074 [2024-11-06 10:02:55.373688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.645 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.645 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:52.645 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:52.905 Nvme0n1 00:10:53.166 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:53.166 [ 00:10:53.166 { 00:10:53.166 "name": "Nvme0n1", 00:10:53.166 "aliases": [ 00:10:53.166 "f1cfed6b-79e1-4d27-b19c-cff90a56ef15" 00:10:53.166 ], 00:10:53.166 "product_name": "NVMe disk", 00:10:53.166 "block_size": 4096, 00:10:53.166 "num_blocks": 38912, 00:10:53.166 "uuid": "f1cfed6b-79e1-4d27-b19c-cff90a56ef15", 00:10:53.166 "numa_id": 0, 00:10:53.166 "assigned_rate_limits": { 00:10:53.166 "rw_ios_per_sec": 0, 00:10:53.166 "rw_mbytes_per_sec": 0, 00:10:53.166 "r_mbytes_per_sec": 0, 00:10:53.166 "w_mbytes_per_sec": 0 00:10:53.166 }, 00:10:53.166 "claimed": false, 00:10:53.166 "zoned": false, 00:10:53.166 "supported_io_types": { 00:10:53.166 "read": true, 00:10:53.166 "write": true, 00:10:53.166 "unmap": true, 00:10:53.166 "flush": true, 00:10:53.166 "reset": true, 00:10:53.166 "nvme_admin": true, 00:10:53.166 "nvme_io": true, 00:10:53.166 "nvme_io_md": false, 00:10:53.166 "write_zeroes": true, 00:10:53.166 "zcopy": false, 00:10:53.166 "get_zone_info": false, 00:10:53.166 "zone_management": false, 00:10:53.166 "zone_append": false, 00:10:53.166 "compare": true, 00:10:53.166 "compare_and_write": true, 00:10:53.166 "abort": true, 00:10:53.166 "seek_hole": false, 00:10:53.166 "seek_data": false, 00:10:53.166 "copy": true, 00:10:53.166 "nvme_iov_md": false 00:10:53.166 }, 00:10:53.166 "memory_domains": [ 00:10:53.166 { 00:10:53.166 "dma_device_id": "system", 00:10:53.167 "dma_device_type": 1 00:10:53.167 } 00:10:53.167 ], 00:10:53.167 "driver_specific": { 00:10:53.167 "nvme": [ 00:10:53.167 { 00:10:53.167 "trid": { 00:10:53.167 "trtype": "TCP", 00:10:53.167 "adrfam": "IPv4", 00:10:53.167 "traddr": "10.0.0.2", 00:10:53.167 "trsvcid": "4420", 00:10:53.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:53.167 }, 00:10:53.167 "ctrlr_data": { 00:10:53.167 "cntlid": 1, 00:10:53.167 "vendor_id": "0x8086", 00:10:53.167 "model_number": "SPDK bdev Controller", 00:10:53.167 "serial_number": "SPDK0", 00:10:53.167 "firmware_revision": "25.01", 00:10:53.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:53.167 "oacs": { 00:10:53.167 "security": 0, 00:10:53.167 "format": 0, 00:10:53.167 "firmware": 0, 00:10:53.167 "ns_manage": 0 00:10:53.167 }, 00:10:53.167 "multi_ctrlr": true, 00:10:53.167 
"ana_reporting": false 00:10:53.167 }, 00:10:53.167 "vs": { 00:10:53.167 "nvme_version": "1.3" 00:10:53.167 }, 00:10:53.167 "ns_data": { 00:10:53.167 "id": 1, 00:10:53.167 "can_share": true 00:10:53.167 } 00:10:53.167 } 00:10:53.167 ], 00:10:53.167 "mp_policy": "active_passive" 00:10:53.167 } 00:10:53.167 } 00:10:53.167 ] 00:10:53.167 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3698310 00:10:53.167 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:53.167 10:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:53.428 Running I/O for 10 seconds... 00:10:54.369 Latency(us) 00:10:54.369 [2024-11-06T09:02:57.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.369 Nvme0n1 : 1.00 17588.00 68.70 0.00 0.00 0.00 0.00 0.00 00:10:54.369 [2024-11-06T09:02:57.870Z] =================================================================================================================== 00:10:54.369 [2024-11-06T09:02:57.870Z] Total : 17588.00 68.70 0.00 0.00 0.00 0.00 0.00 00:10:54.369 00:10:55.312 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:10:55.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.312 Nvme0n1 : 2.00 17809.00 69.57 0.00 0.00 0.00 0.00 0.00 00:10:55.312 [2024-11-06T09:02:58.813Z] =================================================================================================================== 00:10:55.312 [2024-11-06T09:02:58.813Z] Total : 17809.00 69.57 0.00 0.00 0.00 0.00 0.00 00:10:55.312 00:10:55.312 true 00:10:55.312 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:10:55.312 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:55.573 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:55.573 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:55.574 10:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3698310 00:10:56.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.515 Nvme0n1 : 3.00 17861.67 69.77 0.00 0.00 0.00 0.00 0.00 00:10:56.515 [2024-11-06T09:03:00.016Z] =================================================================================================================== 00:10:56.515 [2024-11-06T09:03:00.016Z] Total : 17861.67 69.77 0.00 0.00 0.00 0.00 0.00 00:10:56.515 00:10:57.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.457 Nvme0n1 : 4.00 17894.50 69.90 0.00 0.00 0.00 0.00 0.00 00:10:57.457 [2024-11-06T09:03:00.958Z] 
=================================================================================================================== 00:10:57.457 [2024-11-06T09:03:00.958Z] Total : 17894.50 69.90 0.00 0.00 0.00 0.00 0.00 00:10:57.457 00:10:58.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.401 Nvme0n1 : 5.00 17932.60 70.05 0.00 0.00 0.00 0.00 0.00 00:10:58.401 [2024-11-06T09:03:01.902Z] =================================================================================================================== 00:10:58.401 [2024-11-06T09:03:01.902Z] Total : 17932.60 70.05 0.00 0.00 0.00 0.00 0.00 00:10:58.401 00:10:59.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.342 Nvme0n1 : 6.00 17950.50 70.12 0.00 0.00 0.00 0.00 0.00 00:10:59.342 [2024-11-06T09:03:02.843Z] =================================================================================================================== 00:10:59.342 [2024-11-06T09:03:02.843Z] Total : 17950.50 70.12 0.00 0.00 0.00 0.00 0.00 00:10:59.342 00:11:00.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.284 Nvme0n1 : 7.00 17977.57 70.22 0.00 0.00 0.00 0.00 0.00 00:11:00.284 [2024-11-06T09:03:03.785Z] =================================================================================================================== 00:11:00.284 [2024-11-06T09:03:03.785Z] Total : 17977.57 70.22 0.00 0.00 0.00 0.00 0.00 00:11:00.284 00:11:01.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.226 Nvme0n1 : 8.00 17998.62 70.31 0.00 0.00 0.00 0.00 0.00 00:11:01.226 [2024-11-06T09:03:04.728Z] =================================================================================================================== 00:11:01.227 [2024-11-06T09:03:04.728Z] Total : 17998.62 70.31 0.00 0.00 0.00 0.00 0.00 00:11:01.227 00:11:02.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.610 Nvme0n1 : 9.00 18010.22 70.35 0.00 0.00 0.00 0.00 0.00 00:11:02.610 [2024-11-06T09:03:06.111Z] =================================================================================================================== 00:11:02.610 [2024-11-06T09:03:06.111Z] Total : 18010.22 70.35 0.00 0.00 0.00 0.00 0.00 00:11:02.610 00:11:03.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.551 Nvme0n1 : 10.00 18027.10 70.42 0.00 0.00 0.00 0.00 0.00 00:11:03.551 [2024-11-06T09:03:07.052Z] =================================================================================================================== 00:11:03.551 [2024-11-06T09:03:07.052Z] Total : 18027.10 70.42 0.00 0.00 0.00 0.00 0.00 00:11:03.551 00:11:03.551 00:11:03.551 Latency(us) 00:11:03.551 [2024-11-06T09:03:07.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.551 Nvme0n1 : 10.01 18027.48 70.42 0.00 0.00 7097.83 4369.07 17257.81 00:11:03.551 [2024-11-06T09:03:07.052Z] =================================================================================================================== 00:11:03.551 [2024-11-06T09:03:07.052Z] Total : 18027.48 70.42 0.00 0.00 7097.83 4369.07 17257.81 00:11:03.551 { 00:11:03.551 "results": [ 00:11:03.551 { 00:11:03.551 "job": "Nvme0n1", 00:11:03.551 "core_mask": "0x2", 00:11:03.551 "workload": "randwrite", 00:11:03.551 "status": "finished", 00:11:03.551 "queue_depth": 128, 00:11:03.551 "io_size": 4096, 00:11:03.551 
"runtime": 10.006892, 00:11:03.551 "iops": 18027.475463910272, 00:11:03.551 "mibps": 70.4198260308995, 00:11:03.551 "io_failed": 0, 00:11:03.551 "io_timeout": 0, 00:11:03.551 "avg_latency_us": 7097.833435735971, 00:11:03.551 "min_latency_us": 4369.066666666667, 00:11:03.551 "max_latency_us": 17257.81333333333 00:11:03.551 } 00:11:03.551 ], 00:11:03.551 "core_count": 1 00:11:03.551 } 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3697991 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3697991 ']' 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3697991 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3697991 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3697991' 00:11:03.551 killing process with pid 3697991 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3697991 00:11:03.551 Received shutdown signal, test time was about 10.000000 seconds 00:11:03.551 00:11:03.551 Latency(us) 00:11:03.551 [2024-11-06T09:03:07.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.551 [2024-11-06T09:03:07.052Z] =================================================================================================================== 00:11:03.551 [2024-11-06T09:03:07.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:03.551 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3697991 00:11:03.552 10:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:03.552 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:03.813 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:03.813 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:04.073 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:04.073 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:04.073 10:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:04.334 [2024-11-06 10:03:07.615148] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:04.334 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:04.334 request: 00:11:04.334 { 00:11:04.334 "uuid": "dd28c058-4e52-4efa-bf70-5b24885c4ce5", 00:11:04.334 "method": "bdev_lvol_get_lvstores", 00:11:04.334 "req_id": 1 00:11:04.334 } 00:11:04.334 Got JSON-RPC error response 00:11:04.334 response: 00:11:04.334 { 00:11:04.334 "code": -19, 00:11:04.334 "message": "No such device" 00:11:04.334 } 00:11:04.335 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:04.335 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:04.335 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:04.335 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:04.335 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.595 aio_bdev 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f1cfed6b-79e1-4d27-b19c-cff90a56ef15 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=f1cfed6b-79e1-4d27-b19c-cff90a56ef15 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:04.595 10:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.857 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f1cfed6b-79e1-4d27-b19c-cff90a56ef15 -t 2000 00:11:04.857 [ 00:11:04.857 { 00:11:04.857 "name": "f1cfed6b-79e1-4d27-b19c-cff90a56ef15", 00:11:04.857 "aliases": [ 00:11:04.857 "lvs/lvol" 00:11:04.857 ], 00:11:04.857 "product_name": "Logical Volume", 00:11:04.857 "block_size": 4096, 00:11:04.857 "num_blocks": 38912, 00:11:04.857 "uuid": "f1cfed6b-79e1-4d27-b19c-cff90a56ef15", 00:11:04.857 "assigned_rate_limits": { 00:11:04.857 "rw_ios_per_sec": 0, 00:11:04.857 "rw_mbytes_per_sec": 0, 00:11:04.857 "r_mbytes_per_sec": 0, 00:11:04.857 "w_mbytes_per_sec": 0 00:11:04.857 }, 00:11:04.857 "claimed": false, 00:11:04.857 "zoned": false, 00:11:04.857 "supported_io_types": { 00:11:04.857 "read": true, 00:11:04.857 "write": true, 00:11:04.857 "unmap": true, 00:11:04.857 "flush": false, 00:11:04.857 "reset": true, 00:11:04.857 "nvme_admin": false, 00:11:04.857 "nvme_io": false, 00:11:04.857 "nvme_io_md": false, 00:11:04.857 "write_zeroes": true, 00:11:04.857 "zcopy": false, 00:11:04.857 "get_zone_info": false, 00:11:04.857 "zone_management": false, 00:11:04.857 "zone_append": false, 00:11:04.857 "compare": false, 00:11:04.857 "compare_and_write": false, 00:11:04.857 "abort": false, 00:11:04.857 "seek_hole": true, 00:11:04.857 "seek_data": true, 00:11:04.857 "copy": false, 00:11:04.857 "nvme_iov_md": false 00:11:04.857 }, 00:11:04.857 "driver_specific": { 00:11:04.857 "lvol": { 00:11:04.857 "lvol_store_uuid": "dd28c058-4e52-4efa-bf70-5b24885c4ce5", 00:11:04.857 "base_bdev": "aio_bdev", 00:11:04.857 "thin_provision": false, 00:11:04.857 "num_allocated_clusters": 38, 00:11:04.857 "snapshot": false, 00:11:04.857 "clone": false, 00:11:04.857 "esnap_clone": false 00:11:04.857 } 00:11:04.857 } 00:11:04.857 } 00:11:04.857 ] 00:11:04.857 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:11:04.857 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:04.857 
10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:05.118 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:05.118 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:05.118 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:05.379 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:05.379 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f1cfed6b-79e1-4d27-b19c-cff90a56ef15 00:11:05.379 10:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd28c058-4e52-4efa-bf70-5b24885c4ce5 00:11:05.641 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.901 00:11:05.901 real 0m15.805s 00:11:05.901 user 0m15.514s 00:11:05.901 sys 0m1.302s 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:05.901 ************************************ 00:11:05.901 END TEST lvs_grow_clean 00:11:05.901 ************************************ 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:05.901 ************************************ 00:11:05.901 START TEST lvs_grow_dirty 00:11:05.901 ************************************ 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:05.901 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.162 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:06.162 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:06.422 10:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b lvol 150 00:11:06.683 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:06.683 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:06.683 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:06.683 [2024-11-06 10:03:10.164448] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:06.683 [2024-11-06 10:03:10.164505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:06.683 true 00:11:06.683 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:06.683 10:03:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:06.943 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:06.943 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:07.204 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:07.204 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:07.464 [2024-11-06 10:03:10.826450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.464 10:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3701935 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3701935 /var/tmp/bdevperf.sock 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3701935 ']' 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.725 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:07.725 [2024-11-06 10:03:11.059310] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
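The step both variants (lvs_grow_clean above, lvs_grow_dirty here) are really exercising is growing the lvstore while bdevperf is writing to the exported lvol over NVMe/TCP. A sketch of that check, with binary paths shortened, a simple socket-wait in place of the script's waitforlisten helper, and the lvstore UUID taken from the clean run, looks like this:

# Grow-under-I/O check shared by lvs_grow_clean and lvs_grow_dirty (paths shortened;
# RPC socket, NQN and bdevperf options taken from the trace above).
LVS=dd28c058-4e52-4efa-bf70-5b24885c4ce5                # lvstore UUID from the clean run
BPERF_SOCK=/var/tmp/bdevperf.sock
bdevperf -r "$BPERF_SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
BPERF_PID=$!
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.2; done        # wait for the bdevperf RPC socket
rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s "$BPERF_SOCK" perform_tests &            # 10 s of 4 KiB random writes to Nvme0n1
RUN_PID=$!
sleep 2
# Grow the lvstore while the writes are in flight, then confirm the new capacity.
rpc.py bdev_lvol_grow_lvstore -u "$LVS"
rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # expect 99
wait "$RUN_PID"                                         # let the 10-second run finish
kill "$BPERF_PID"                                       # bdevperf was started with -z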
00:11:07.725 [2024-11-06 10:03:11.059363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701935 ] 00:11:07.725 [2024-11-06 10:03:11.147679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.725 [2024-11-06 10:03:11.177637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.666 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.666 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:08.666 10:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:08.927 Nvme0n1 00:11:08.927 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:08.927 [ 00:11:08.927 { 00:11:08.927 "name": "Nvme0n1", 00:11:08.927 "aliases": [ 00:11:08.927 "8a1237c0-b324-44db-9bfc-3fda9b90f809" 00:11:08.927 ], 00:11:08.927 "product_name": "NVMe disk", 00:11:08.927 "block_size": 4096, 00:11:08.927 "num_blocks": 38912, 00:11:08.927 "uuid": "8a1237c0-b324-44db-9bfc-3fda9b90f809", 00:11:08.927 "numa_id": 0, 00:11:08.927 "assigned_rate_limits": { 00:11:08.927 "rw_ios_per_sec": 0, 00:11:08.927 "rw_mbytes_per_sec": 0, 00:11:08.927 "r_mbytes_per_sec": 0, 00:11:08.927 "w_mbytes_per_sec": 0 00:11:08.927 }, 00:11:08.927 "claimed": false, 00:11:08.927 "zoned": false, 00:11:08.927 "supported_io_types": { 00:11:08.927 "read": true, 00:11:08.927 "write": true, 00:11:08.927 "unmap": true, 00:11:08.927 "flush": true, 00:11:08.927 "reset": true, 00:11:08.927 "nvme_admin": true, 00:11:08.927 "nvme_io": true, 00:11:08.927 "nvme_io_md": false, 00:11:08.927 "write_zeroes": true, 00:11:08.927 "zcopy": false, 00:11:08.927 "get_zone_info": false, 00:11:08.927 "zone_management": false, 00:11:08.927 "zone_append": false, 00:11:08.927 "compare": true, 00:11:08.927 "compare_and_write": true, 00:11:08.927 "abort": true, 00:11:08.927 "seek_hole": false, 00:11:08.927 "seek_data": false, 00:11:08.927 "copy": true, 00:11:08.927 "nvme_iov_md": false 00:11:08.927 }, 00:11:08.927 "memory_domains": [ 00:11:08.927 { 00:11:08.927 "dma_device_id": "system", 00:11:08.927 "dma_device_type": 1 00:11:08.927 } 00:11:08.927 ], 00:11:08.927 "driver_specific": { 00:11:08.927 "nvme": [ 00:11:08.927 { 00:11:08.927 "trid": { 00:11:08.927 "trtype": "TCP", 00:11:08.927 "adrfam": "IPv4", 00:11:08.927 "traddr": "10.0.0.2", 00:11:08.927 "trsvcid": "4420", 00:11:08.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:08.927 }, 00:11:08.927 "ctrlr_data": { 00:11:08.927 "cntlid": 1, 00:11:08.927 "vendor_id": "0x8086", 00:11:08.927 "model_number": "SPDK bdev Controller", 00:11:08.928 "serial_number": "SPDK0", 00:11:08.928 "firmware_revision": "25.01", 00:11:08.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.928 "oacs": { 00:11:08.928 "security": 0, 00:11:08.928 "format": 0, 00:11:08.928 "firmware": 0, 00:11:08.928 "ns_manage": 0 00:11:08.928 }, 00:11:08.928 "multi_ctrlr": true, 00:11:08.928 
"ana_reporting": false 00:11:08.928 }, 00:11:08.928 "vs": { 00:11:08.928 "nvme_version": "1.3" 00:11:08.928 }, 00:11:08.928 "ns_data": { 00:11:08.928 "id": 1, 00:11:08.928 "can_share": true 00:11:08.928 } 00:11:08.928 } 00:11:08.928 ], 00:11:08.928 "mp_policy": "active_passive" 00:11:08.928 } 00:11:08.928 } 00:11:08.928 ] 00:11:08.928 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3702147 00:11:08.928 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:08.928 10:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:09.188 Running I/O for 10 seconds... 00:11:10.129 Latency(us) 00:11:10.129 [2024-11-06T09:03:13.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.129 Nvme0n1 : 1.00 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:11:10.129 [2024-11-06T09:03:13.630Z] =================================================================================================================== 00:11:10.129 [2024-11-06T09:03:13.630Z] Total : 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:11:10.129 00:11:11.071 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:11.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.071 Nvme0n1 : 2.00 17835.50 69.67 0.00 0.00 0.00 0.00 0.00 00:11:11.071 [2024-11-06T09:03:14.572Z] =================================================================================================================== 00:11:11.071 [2024-11-06T09:03:14.572Z] Total : 17835.50 69.67 0.00 0.00 0.00 0.00 0.00 00:11:11.071 00:11:11.331 true 00:11:11.331 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:11.331 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:11.331 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:11.331 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:11.331 10:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3702147 00:11:12.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.320 Nvme0n1 : 3.00 17883.33 69.86 0.00 0.00 0.00 0.00 0.00 00:11:12.320 [2024-11-06T09:03:15.821Z] =================================================================================================================== 00:11:12.320 [2024-11-06T09:03:15.822Z] Total : 17883.33 69.86 0.00 0.00 0.00 0.00 0.00 00:11:12.321 00:11:13.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.261 Nvme0n1 : 4.00 17942.75 70.09 0.00 0.00 0.00 0.00 0.00 00:11:13.261 [2024-11-06T09:03:16.762Z] 
=================================================================================================================== 00:11:13.261 [2024-11-06T09:03:16.762Z] Total : 17942.75 70.09 0.00 0.00 0.00 0.00 0.00 00:11:13.261 00:11:14.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.204 Nvme0n1 : 5.00 17969.60 70.19 0.00 0.00 0.00 0.00 0.00 00:11:14.204 [2024-11-06T09:03:17.705Z] =================================================================================================================== 00:11:14.204 [2024-11-06T09:03:17.705Z] Total : 17969.60 70.19 0.00 0.00 0.00 0.00 0.00 00:11:14.204 00:11:15.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.144 Nvme0n1 : 6.00 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:11:15.144 [2024-11-06T09:03:18.645Z] =================================================================================================================== 00:11:15.144 [2024-11-06T09:03:18.645Z] Total : 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:11:15.144 00:11:16.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.085 Nvme0n1 : 7.00 17998.86 70.31 0.00 0.00 0.00 0.00 0.00 00:11:16.085 [2024-11-06T09:03:19.586Z] =================================================================================================================== 00:11:16.085 [2024-11-06T09:03:19.586Z] Total : 17998.86 70.31 0.00 0.00 0.00 0.00 0.00 00:11:16.085 00:11:17.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.025 Nvme0n1 : 8.00 18016.00 70.38 0.00 0.00 0.00 0.00 0.00 00:11:17.025 [2024-11-06T09:03:20.526Z] =================================================================================================================== 00:11:17.025 [2024-11-06T09:03:20.526Z] Total : 18016.00 70.38 0.00 0.00 0.00 0.00 0.00 00:11:17.025 00:11:18.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.405 Nvme0n1 : 9.00 18030.89 70.43 0.00 0.00 0.00 0.00 0.00 00:11:18.405 [2024-11-06T09:03:21.906Z] =================================================================================================================== 00:11:18.405 [2024-11-06T09:03:21.906Z] Total : 18030.89 70.43 0.00 0.00 0.00 0.00 0.00 00:11:18.405 00:11:19.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.347 Nvme0n1 : 10.00 18038.40 70.46 0.00 0.00 0.00 0.00 0.00 00:11:19.347 [2024-11-06T09:03:22.848Z] =================================================================================================================== 00:11:19.347 [2024-11-06T09:03:22.848Z] Total : 18038.40 70.46 0.00 0.00 0.00 0.00 0.00 00:11:19.347 00:11:19.347 00:11:19.347 Latency(us) 00:11:19.347 [2024-11-06T09:03:22.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.347 Nvme0n1 : 10.00 18040.73 70.47 0.00 0.00 7093.43 4287.15 15073.28 00:11:19.347 [2024-11-06T09:03:22.848Z] =================================================================================================================== 00:11:19.347 [2024-11-06T09:03:22.848Z] Total : 18040.73 70.47 0.00 0.00 7093.43 4287.15 15073.28 00:11:19.347 { 00:11:19.347 "results": [ 00:11:19.347 { 00:11:19.347 "job": "Nvme0n1", 00:11:19.347 "core_mask": "0x2", 00:11:19.347 "workload": "randwrite", 00:11:19.347 "status": "finished", 00:11:19.347 "queue_depth": 128, 00:11:19.347 "io_size": 4096, 00:11:19.347 
"runtime": 10.002254, 00:11:19.347 "iops": 18040.733618642356, 00:11:19.347 "mibps": 70.4716156978217, 00:11:19.347 "io_failed": 0, 00:11:19.347 "io_timeout": 0, 00:11:19.347 "avg_latency_us": 7093.430712301236, 00:11:19.347 "min_latency_us": 4287.1466666666665, 00:11:19.347 "max_latency_us": 15073.28 00:11:19.347 } 00:11:19.347 ], 00:11:19.347 "core_count": 1 00:11:19.347 } 00:11:19.347 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3701935 00:11:19.347 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3701935 ']' 00:11:19.347 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3701935 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3701935 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3701935' 00:11:19.348 killing process with pid 3701935 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3701935 00:11:19.348 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.348 00:11:19.348 Latency(us) 00:11:19.348 [2024-11-06T09:03:22.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.348 [2024-11-06T09:03:22.849Z] =================================================================================================================== 00:11:19.348 [2024-11-06T09:03:22.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3701935 00:11:19.348 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.608 10:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:19.608 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:19.608 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:19.868 10:03:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3697532 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3697532 00:11:19.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3697532 Killed "${NVMF_APP[@]}" "$@" 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3704307 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3704307 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3704307 ']' 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.868 10:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:19.868 [2024-11-06 10:03:23.301723] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:19.868 [2024-11-06 10:03:23.301779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.130 [2024-11-06 10:03:23.386288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.130 [2024-11-06 10:03:23.421295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.130 [2024-11-06 10:03:23.421326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.130 [2024-11-06 10:03:23.421334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.130 [2024-11-06 10:03:23.421341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
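[editor's note] The "dirty" half of the test is what follows: the nvmf target that owned the lvstore is killed with SIGKILL (no clean shutdown), a fresh target is started, and the same AIO file is re-attached so blobstore recovery has to replay the metadata. A rough sketch of that flow, using the same shortened paths and placeholders as above, with pid handling simplified:

# Simulate a crash: the lvstore metadata on the AIO file is left dirty.
kill -9 "$nvmf_tgt_pid"

# Restart the target and re-create the AIO bdev on the same file.
# The blobstore layer notices the unclean state and runs recovery
# (the bs_recover notices in the trace); the lvol then reappears
# under its original UUID with the cluster counts seen before the kill.
./build/bin/nvmf_tgt -m 0x1 &
./scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
./scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000      # wait for the recovered lvol
./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>       # 61 free / 99 total clusters, as checked below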
00:11:20.130 [2024-11-06 10:03:23.421347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.130 [2024-11-06 10:03:23.421892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.701 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:20.962 [2024-11-06 10:03:24.280399] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:20.962 [2024-11-06 10:03:24.280490] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:20.962 [2024-11-06 10:03:24.280521] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.962 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:21.222 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a1237c0-b324-44db-9bfc-3fda9b90f809 -t 2000 00:11:21.222 [ 00:11:21.222 { 00:11:21.222 "name": "8a1237c0-b324-44db-9bfc-3fda9b90f809", 00:11:21.222 "aliases": [ 00:11:21.222 "lvs/lvol" 00:11:21.222 ], 00:11:21.222 "product_name": "Logical Volume", 00:11:21.222 "block_size": 4096, 00:11:21.222 "num_blocks": 38912, 00:11:21.222 "uuid": "8a1237c0-b324-44db-9bfc-3fda9b90f809", 00:11:21.222 "assigned_rate_limits": { 00:11:21.222 "rw_ios_per_sec": 0, 00:11:21.222 "rw_mbytes_per_sec": 0, 
00:11:21.222 "r_mbytes_per_sec": 0, 00:11:21.222 "w_mbytes_per_sec": 0 00:11:21.222 }, 00:11:21.222 "claimed": false, 00:11:21.222 "zoned": false, 00:11:21.222 "supported_io_types": { 00:11:21.222 "read": true, 00:11:21.222 "write": true, 00:11:21.222 "unmap": true, 00:11:21.222 "flush": false, 00:11:21.222 "reset": true, 00:11:21.222 "nvme_admin": false, 00:11:21.222 "nvme_io": false, 00:11:21.222 "nvme_io_md": false, 00:11:21.222 "write_zeroes": true, 00:11:21.222 "zcopy": false, 00:11:21.222 "get_zone_info": false, 00:11:21.222 "zone_management": false, 00:11:21.222 "zone_append": false, 00:11:21.222 "compare": false, 00:11:21.222 "compare_and_write": false, 00:11:21.222 "abort": false, 00:11:21.222 "seek_hole": true, 00:11:21.222 "seek_data": true, 00:11:21.222 "copy": false, 00:11:21.222 "nvme_iov_md": false 00:11:21.222 }, 00:11:21.222 "driver_specific": { 00:11:21.222 "lvol": { 00:11:21.222 "lvol_store_uuid": "6cbf7742-f7dc-48d2-9795-d4316f541e1b", 00:11:21.222 "base_bdev": "aio_bdev", 00:11:21.222 "thin_provision": false, 00:11:21.222 "num_allocated_clusters": 38, 00:11:21.222 "snapshot": false, 00:11:21.222 "clone": false, 00:11:21.222 "esnap_clone": false 00:11:21.222 } 00:11:21.222 } 00:11:21.222 } 00:11:21.222 ] 00:11:21.222 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:21.222 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:21.222 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:21.482 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:21.482 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:21.482 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:21.482 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:21.482 10:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:21.742 [2024-11-06 10:03:25.124552] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:21.742 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:21.742 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:21.742 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:21.743 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:22.003 request: 00:11:22.003 { 00:11:22.003 "uuid": "6cbf7742-f7dc-48d2-9795-d4316f541e1b", 00:11:22.003 "method": "bdev_lvol_get_lvstores", 00:11:22.003 "req_id": 1 00:11:22.003 } 00:11:22.003 Got JSON-RPC error response 00:11:22.003 response: 00:11:22.003 { 00:11:22.003 "code": -19, 00:11:22.003 "message": "No such device" 00:11:22.003 } 00:11:22.003 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:22.003 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:22.003 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:22.003 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:22.003 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:22.003 aio_bdev 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:22.263 10:03:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:22.263 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a1237c0-b324-44db-9bfc-3fda9b90f809 -t 2000 00:11:22.523 [ 00:11:22.523 { 00:11:22.523 "name": "8a1237c0-b324-44db-9bfc-3fda9b90f809", 00:11:22.523 "aliases": [ 00:11:22.523 "lvs/lvol" 00:11:22.523 ], 00:11:22.523 "product_name": "Logical Volume", 00:11:22.523 "block_size": 4096, 00:11:22.523 "num_blocks": 38912, 00:11:22.523 "uuid": "8a1237c0-b324-44db-9bfc-3fda9b90f809", 00:11:22.523 "assigned_rate_limits": { 00:11:22.523 "rw_ios_per_sec": 0, 00:11:22.523 "rw_mbytes_per_sec": 0, 00:11:22.523 "r_mbytes_per_sec": 0, 00:11:22.523 "w_mbytes_per_sec": 0 00:11:22.523 }, 00:11:22.523 "claimed": false, 00:11:22.523 "zoned": false, 00:11:22.523 "supported_io_types": { 00:11:22.523 "read": true, 00:11:22.523 "write": true, 00:11:22.523 "unmap": true, 00:11:22.523 "flush": false, 00:11:22.523 "reset": true, 00:11:22.523 "nvme_admin": false, 00:11:22.523 "nvme_io": false, 00:11:22.523 "nvme_io_md": false, 00:11:22.523 "write_zeroes": true, 00:11:22.523 "zcopy": false, 00:11:22.523 "get_zone_info": false, 00:11:22.523 "zone_management": false, 00:11:22.523 "zone_append": false, 00:11:22.523 "compare": false, 00:11:22.523 "compare_and_write": false, 00:11:22.523 "abort": false, 00:11:22.523 "seek_hole": true, 00:11:22.523 "seek_data": true, 00:11:22.523 "copy": false, 00:11:22.523 "nvme_iov_md": false 00:11:22.523 }, 00:11:22.523 "driver_specific": { 00:11:22.523 "lvol": { 00:11:22.523 "lvol_store_uuid": "6cbf7742-f7dc-48d2-9795-d4316f541e1b", 00:11:22.523 "base_bdev": "aio_bdev", 00:11:22.523 "thin_provision": false, 00:11:22.523 "num_allocated_clusters": 38, 00:11:22.523 "snapshot": false, 00:11:22.523 "clone": false, 00:11:22.523 "esnap_clone": false 00:11:22.523 } 00:11:22.523 } 00:11:22.523 } 00:11:22.523 ] 00:11:22.523 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:22.523 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:22.523 10:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:22.523 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:22.523 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:22.523 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:22.783 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:22.783 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a1237c0-b324-44db-9bfc-3fda9b90f809 00:11:23.043 10:03:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6cbf7742-f7dc-48d2-9795-d4316f541e1b 00:11:23.044 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:23.304 00:11:23.304 real 0m17.408s 00:11:23.304 user 0m45.647s 00:11:23.304 sys 0m2.998s 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:23.304 ************************************ 00:11:23.304 END TEST lvs_grow_dirty 00:11:23.304 ************************************ 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:23.304 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:23.304 nvmf_trace.0 00:11:23.564 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:23.564 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:23.564 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.564 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.565 rmmod nvme_tcp 00:11:23.565 rmmod nvme_fabrics 00:11:23.565 rmmod nvme_keyring 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:23.565 
10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3704307 ']' 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3704307 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3704307 ']' 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3704307 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3704307 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3704307' 00:11:23.565 killing process with pid 3704307 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3704307 00:11:23.565 10:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3704307 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.825 10:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.741 00:11:25.741 real 0m45.533s 00:11:25.741 user 1m7.842s 00:11:25.741 sys 0m11.039s 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 ************************************ 00:11:25.741 END TEST nvmf_lvs_grow 00:11:25.741 ************************************ 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.741 ************************************ 00:11:25.741 START TEST nvmf_bdev_io_wait 00:11:25.741 ************************************ 00:11:25.741 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:26.006 * Looking for test storage... 00:11:26.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.006 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:26.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.006 --rc genhtml_branch_coverage=1 00:11:26.007 --rc genhtml_function_coverage=1 00:11:26.007 --rc genhtml_legend=1 00:11:26.007 --rc geninfo_all_blocks=1 00:11:26.007 --rc geninfo_unexecuted_blocks=1 00:11:26.007 00:11:26.007 ' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:26.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.007 --rc genhtml_branch_coverage=1 00:11:26.007 --rc genhtml_function_coverage=1 00:11:26.007 --rc genhtml_legend=1 00:11:26.007 --rc geninfo_all_blocks=1 00:11:26.007 --rc geninfo_unexecuted_blocks=1 00:11:26.007 00:11:26.007 ' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:26.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.007 --rc genhtml_branch_coverage=1 00:11:26.007 --rc genhtml_function_coverage=1 00:11:26.007 --rc genhtml_legend=1 00:11:26.007 --rc geninfo_all_blocks=1 00:11:26.007 --rc geninfo_unexecuted_blocks=1 00:11:26.007 00:11:26.007 ' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:26.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.007 --rc genhtml_branch_coverage=1 00:11:26.007 --rc genhtml_function_coverage=1 00:11:26.007 --rc genhtml_legend=1 00:11:26.007 --rc geninfo_all_blocks=1 00:11:26.007 --rc geninfo_unexecuted_blocks=1 00:11:26.007 00:11:26.007 ' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.007 10:03:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.007 10:03:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:34.238 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:34.238 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.238 10:03:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:34.238 Found net devices under 0000:31:00.0: cvl_0_0 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.238 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:34.239 Found net devices under 0000:31:00.1: cvl_0_1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.239 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:11:34.500 00:11:34.500 --- 10.0.0.2 ping statistics --- 00:11:34.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.500 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:11:34.500 00:11:34.500 --- 10.0.0.1 ping statistics --- 00:11:34.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.500 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3709907 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3709907 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3709907 ']' 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.500 10:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.500 [2024-11-06 10:03:37.999488] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:34.500 [2024-11-06 10:03:37.999562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.761 [2024-11-06 10:03:38.094477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.761 [2024-11-06 10:03:38.138547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.761 [2024-11-06 10:03:38.138586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.761 [2024-11-06 10:03:38.138594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.761 [2024-11-06 10:03:38.138601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.761 [2024-11-06 10:03:38.138607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.761 [2024-11-06 10:03:38.140469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.761 [2024-11-06 10:03:38.140586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.761 [2024-11-06 10:03:38.140746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.761 [2024-11-06 10:03:38.140746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.332 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.332 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:35.332 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.332 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.332 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.592 [2024-11-06 10:03:38.909344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 Malloc0 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.592 [2024-11-06 10:03:38.968610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3710103 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3710105 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.592 { 00:11:35.592 "params": { 
00:11:35.592 "name": "Nvme$subsystem", 00:11:35.592 "trtype": "$TEST_TRANSPORT", 00:11:35.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.592 "adrfam": "ipv4", 00:11:35.592 "trsvcid": "$NVMF_PORT", 00:11:35.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.592 "hdgst": ${hdgst:-false}, 00:11:35.592 "ddgst": ${ddgst:-false} 00:11:35.592 }, 00:11:35.592 "method": "bdev_nvme_attach_controller" 00:11:35.592 } 00:11:35.592 EOF 00:11:35.592 )") 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3710107 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3710110 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.592 { 00:11:35.592 "params": { 00:11:35.592 "name": "Nvme$subsystem", 00:11:35.592 "trtype": "$TEST_TRANSPORT", 00:11:35.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.592 "adrfam": "ipv4", 00:11:35.592 "trsvcid": "$NVMF_PORT", 00:11:35.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.592 "hdgst": ${hdgst:-false}, 00:11:35.592 "ddgst": ${ddgst:-false} 00:11:35.592 }, 00:11:35.592 "method": "bdev_nvme_attach_controller" 00:11:35.592 } 00:11:35.592 EOF 00:11:35.592 )") 00:11:35.592 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.593 { 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme$subsystem", 00:11:35.593 "trtype": "$TEST_TRANSPORT", 00:11:35.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "$NVMF_PORT", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.593 "hdgst": ${hdgst:-false}, 
00:11:35.593 "ddgst": ${ddgst:-false} 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 } 00:11:35.593 EOF 00:11:35.593 )") 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.593 { 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme$subsystem", 00:11:35.593 "trtype": "$TEST_TRANSPORT", 00:11:35.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "$NVMF_PORT", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.593 "hdgst": ${hdgst:-false}, 00:11:35.593 "ddgst": ${ddgst:-false} 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 } 00:11:35.593 EOF 00:11:35.593 )") 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3710103 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme1", 00:11:35.593 "trtype": "tcp", 00:11:35.593 "traddr": "10.0.0.2", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "4420", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.593 "hdgst": false, 00:11:35.593 "ddgst": false 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 }' 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme1", 00:11:35.593 "trtype": "tcp", 00:11:35.593 "traddr": "10.0.0.2", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "4420", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.593 "hdgst": false, 00:11:35.593 "ddgst": false 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 }' 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme1", 00:11:35.593 "trtype": "tcp", 00:11:35.593 "traddr": "10.0.0.2", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "4420", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.593 "hdgst": false, 00:11:35.593 "ddgst": false 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 }' 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.593 10:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.593 "params": { 00:11:35.593 "name": "Nvme1", 00:11:35.593 "trtype": "tcp", 00:11:35.593 "traddr": "10.0.0.2", 00:11:35.593 "adrfam": "ipv4", 00:11:35.593 "trsvcid": "4420", 00:11:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.593 "hdgst": false, 00:11:35.593 "ddgst": false 00:11:35.593 }, 00:11:35.593 "method": "bdev_nvme_attach_controller" 00:11:35.593 }' 00:11:35.593 [2024-11-06 10:03:39.025016] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:35.593 [2024-11-06 10:03:39.025069] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:35.593 [2024-11-06 10:03:39.026127] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:35.593 [2024-11-06 10:03:39.026176] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:35.593 [2024-11-06 10:03:39.026732] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:35.593 [2024-11-06 10:03:39.026778] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:35.593 [2024-11-06 10:03:39.027200] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:35.593 [2024-11-06 10:03:39.027247] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:35.853 [2024-11-06 10:03:39.197394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.853 [2024-11-06 10:03:39.227310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.853 [2024-11-06 10:03:39.239384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.853 [2024-11-06 10:03:39.267918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:35.853 [2024-11-06 10:03:39.286717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.853 [2024-11-06 10:03:39.315451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:35.853 [2024-11-06 10:03:39.346664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.113 [2024-11-06 10:03:39.375125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:36.113 Running I/O for 1 seconds... 00:11:36.113 Running I/O for 1 seconds... 00:11:36.113 Running I/O for 1 seconds... 00:11:36.113 Running I/O for 1 seconds... 00:11:37.054 11453.00 IOPS, 44.74 MiB/s 00:11:37.054 Latency(us) 00:11:37.054 [2024-11-06T09:03:40.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.054 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:37.054 Nvme1n1 : 1.01 11449.42 44.72 0.00 0.00 11105.83 4942.51 18568.53 00:11:37.054 [2024-11-06T09:03:40.555Z] =================================================================================================================== 00:11:37.054 [2024-11-06T09:03:40.555Z] Total : 11449.42 44.72 0.00 0.00 11105.83 4942.51 18568.53 00:11:37.315 14323.00 IOPS, 55.95 MiB/s [2024-11-06T09:03:40.816Z] 11294.00 IOPS, 44.12 MiB/s 00:11:37.315 Latency(us) 00:11:37.315 [2024-11-06T09:03:40.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.315 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:37.315 Nvme1n1 : 1.01 14383.54 56.19 0.00 0.00 8870.30 4560.21 21408.43 00:11:37.315 [2024-11-06T09:03:40.816Z] =================================================================================================================== 00:11:37.315 [2024-11-06T09:03:40.816Z] Total : 14383.54 56.19 0.00 0.00 8870.30 4560.21 21408.43 00:11:37.315 00:11:37.315 Latency(us) 00:11:37.315 [2024-11-06T09:03:40.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.315 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:37.315 Nvme1n1 : 1.01 11416.24 44.59 0.00 0.00 11187.88 2894.51 25668.27 00:11:37.315 [2024-11-06T09:03:40.816Z] =================================================================================================================== 00:11:37.315 [2024-11-06T09:03:40.816Z] Total : 11416.24 44.59 0.00 0.00 11187.88 2894.51 25668.27 00:11:37.315 188480.00 IOPS, 736.25 MiB/s 00:11:37.315 Latency(us) 00:11:37.315 [2024-11-06T09:03:40.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.315 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:37.315 Nvme1n1 : 1.00 188101.26 734.77 0.00 0.00 677.07 303.79 1979.73 00:11:37.315 [2024-11-06T09:03:40.816Z] 
=================================================================================================================== 00:11:37.315 [2024-11-06T09:03:40.816Z] Total : 188101.26 734.77 0.00 0.00 677.07 303.79 1979.73 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3710105 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3710107 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3710110 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.315 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.316 rmmod nvme_tcp 00:11:37.316 rmmod nvme_fabrics 00:11:37.316 rmmod nvme_keyring 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3709907 ']' 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3709907 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3709907 ']' 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3709907 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.316 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3709907 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3709907' 00:11:37.576 killing process with pid 3709907 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3709907 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3709907 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.576 10:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.121 00:11:40.121 real 0m13.822s 00:11:40.121 user 0m19.014s 00:11:40.121 sys 0m7.866s 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.121 ************************************ 00:11:40.121 END TEST nvmf_bdev_io_wait 00:11:40.121 ************************************ 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.121 ************************************ 00:11:40.121 START TEST nvmf_queue_depth 00:11:40.121 ************************************ 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.121 * Looking for test storage... 
00:11:40.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.121 --rc genhtml_branch_coverage=1 00:11:40.121 --rc genhtml_function_coverage=1 00:11:40.121 --rc genhtml_legend=1 00:11:40.121 --rc geninfo_all_blocks=1 00:11:40.121 --rc geninfo_unexecuted_blocks=1 00:11:40.121 00:11:40.121 ' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.121 --rc genhtml_branch_coverage=1 00:11:40.121 --rc genhtml_function_coverage=1 00:11:40.121 --rc genhtml_legend=1 00:11:40.121 --rc geninfo_all_blocks=1 00:11:40.121 --rc geninfo_unexecuted_blocks=1 00:11:40.121 00:11:40.121 ' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.121 --rc genhtml_branch_coverage=1 00:11:40.121 --rc genhtml_function_coverage=1 00:11:40.121 --rc genhtml_legend=1 00:11:40.121 --rc geninfo_all_blocks=1 00:11:40.121 --rc geninfo_unexecuted_blocks=1 00:11:40.121 00:11:40.121 ' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.121 --rc genhtml_branch_coverage=1 00:11:40.121 --rc genhtml_function_coverage=1 00:11:40.121 --rc genhtml_legend=1 00:11:40.121 --rc geninfo_all_blocks=1 00:11:40.121 --rc geninfo_unexecuted_blocks=1 00:11:40.121 00:11:40.121 ' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.121 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.122 10:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.266 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:48.267 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:48.267 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:48.267 Found net devices under 0000:31:00.0: cvl_0_0 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:48.267 Found net devices under 0000:31:00.1: cvl_0_1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:48.267 10:03:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:48.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:11:48.267 00:11:48.267 --- 10.0.0.2 ping statistics --- 00:11:48.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.267 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:11:48.267 00:11:48.267 --- 10.0.0.1 ping statistics --- 00:11:48.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.267 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3715161 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3715161 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3715161 ']' 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.267 10:03:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.267 [2024-11-06 10:03:51.241668] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
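[Editor's note] For anyone rebuilding this fixture by hand: the trace above shows nvmf/common.sh resolving the two E810 ports (8086:159b) to their netdevs through /sys/bus/pci/devices/<bdf>/net/ and then wiring up the TCP test topology. A minimal sketch of the equivalent manual steps, reusing the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing from this particular run (both are host-specific), is:

  # netdev names behind the two discovered ports (ice driver; names vary per host)
  ls /sys/bus/pci/devices/0000:31:00.0/net/   # -> cvl_0_0, used as the target side
  ls /sys/bus/pci/devices/0000:31:00.1/net/   # -> cvl_0_1, used as the initiator side

  # isolate the target port in its own namespace and address both ends
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port (the harness tags the rule with an SPDK_NVMF comment for later cleanup)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator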
00:11:48.267 [2024-11-06 10:03:51.241730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.267 [2024-11-06 10:03:51.354208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.267 [2024-11-06 10:03:51.405882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.267 [2024-11-06 10:03:51.405943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.267 [2024-11-06 10:03:51.405953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.267 [2024-11-06 10:03:51.405961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.267 [2024-11-06 10:03:51.405967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.267 [2024-11-06 10:03:51.406827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 [2024-11-06 10:03:52.110673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 Malloc0 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.840 10:03:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 [2024-11-06 10:03:52.172016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3715508 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3715508 /var/tmp/bdevperf.sock 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3715508 ']' 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.840 10:03:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.840 [2024-11-06 10:03:52.231099] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:48.840 [2024-11-06 10:03:52.231171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715508 ] 00:11:48.840 [2024-11-06 10:03:52.314660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.100 [2024-11-06 10:03:52.357083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.671 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.671 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:49.671 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:49.671 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.671 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:49.931 NVMe0n1 00:11:49.931 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.931 10:03:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:49.931 Running I/O for 10 seconds... 00:11:52.259 10240.00 IOPS, 40.00 MiB/s [2024-11-06T09:03:56.702Z] 10905.50 IOPS, 42.60 MiB/s [2024-11-06T09:03:57.644Z] 11258.67 IOPS, 43.98 MiB/s [2024-11-06T09:03:58.586Z] 11284.75 IOPS, 44.08 MiB/s [2024-11-06T09:03:59.527Z] 11423.00 IOPS, 44.62 MiB/s [2024-11-06T09:04:00.468Z] 11460.00 IOPS, 44.77 MiB/s [2024-11-06T09:04:01.407Z] 11550.86 IOPS, 45.12 MiB/s [2024-11-06T09:04:02.791Z] 11562.75 IOPS, 45.17 MiB/s [2024-11-06T09:04:03.733Z] 11602.11 IOPS, 45.32 MiB/s [2024-11-06T09:04:03.733Z] 11644.00 IOPS, 45.48 MiB/s 00:12:00.232 Latency(us) 00:12:00.232 [2024-11-06T09:04:03.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:00.232 Verification LBA range: start 0x0 length 0x4000 00:12:00.232 NVMe0n1 : 10.06 11655.65 45.53 0.00 0.00 87492.74 20316.16 62914.56 00:12:00.232 [2024-11-06T09:04:03.733Z] =================================================================================================================== 00:12:00.232 [2024-11-06T09:04:03.733Z] Total : 11655.65 45.53 0.00 0.00 87492.74 20316.16 62914.56 00:12:00.232 { 00:12:00.232 "results": [ 00:12:00.232 { 00:12:00.232 "job": "NVMe0n1", 00:12:00.232 "core_mask": "0x1", 00:12:00.232 "workload": "verify", 00:12:00.232 "status": "finished", 00:12:00.232 "verify_range": { 00:12:00.232 "start": 0, 00:12:00.232 "length": 16384 00:12:00.232 }, 00:12:00.232 "queue_depth": 1024, 00:12:00.232 "io_size": 4096, 00:12:00.232 "runtime": 10.0571, 00:12:00.232 "iops": 11655.646259856221, 00:12:00.232 "mibps": 45.529868202563364, 00:12:00.232 "io_failed": 0, 00:12:00.232 "io_timeout": 0, 00:12:00.233 "avg_latency_us": 87492.73573720518, 00:12:00.233 "min_latency_us": 20316.16, 00:12:00.233 "max_latency_us": 62914.56 00:12:00.233 } 00:12:00.233 ], 00:12:00.233 "core_count": 1 00:12:00.233 } 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3715508 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3715508 ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3715508 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715508 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715508' 00:12:00.233 killing process with pid 3715508 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3715508 00:12:00.233 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.233 00:12:00.233 Latency(us) 00:12:00.233 [2024-11-06T09:04:03.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.233 [2024-11-06T09:04:03.734Z] =================================================================================================================== 00:12:00.233 [2024-11-06T09:04:03.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3715508 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.233 rmmod nvme_tcp 00:12:00.233 rmmod nvme_fabrics 00:12:00.233 rmmod nvme_keyring 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3715161 ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3715161 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3715161 ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3715161 
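[Editor's note] A quick sanity check on the result table above: at a 4096-byte I/O size, 11655.65 IOPS works out to 11655.65 * 4096 / 2^20 ≈ 45.5 MiB/s, matching the reported 45.53 MiB/s, and by Little's law the mean latency at a sustained queue depth of 1024 should be roughly 1024 / 11655.65 s ≈ 87.9 ms, consistent with the reported 87492.74 us average (the small gap is ramp-up and accounting overhead within the 10.06 s runtime).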
00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.233 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715161 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715161' 00:12:00.494 killing process with pid 3715161 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3715161 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3715161 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.494 10:04:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.036 10:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.036 00:12:03.036 real 0m22.842s 00:12:03.036 user 0m25.377s 00:12:03.036 sys 0m7.508s 00:12:03.036 10:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.036 10:04:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 ************************************ 00:12:03.036 END TEST nvmf_queue_depth 00:12:03.036 ************************************ 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 
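[Editor's note] The nvmf_queue_depth case that just finished reduces to a short target-configuration plus bdevperf sequence, all of it visible in the trace above. A condensed sketch follows; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, and the paths assume the workspace layout used by this job:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # target side, inside the cvl_0_0_ns_spdk namespace set up earlier
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf drives the deep-queue workload
  # -z wait for RPC config, -r RPC socket, -q 1024 queue depth, -o 4096-byte I/Os,
  # -w verify workload, -t 10 second run
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests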
************************************ 00:12:03.036 START TEST nvmf_target_multipath 00:12:03.036 ************************************ 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:03.036 * Looking for test storage... 00:12:03.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.036 --rc genhtml_branch_coverage=1 00:12:03.036 --rc genhtml_function_coverage=1 00:12:03.036 --rc genhtml_legend=1 00:12:03.036 --rc geninfo_all_blocks=1 00:12:03.036 --rc geninfo_unexecuted_blocks=1 00:12:03.036 00:12:03.036 ' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.036 --rc genhtml_branch_coverage=1 00:12:03.036 --rc genhtml_function_coverage=1 00:12:03.036 --rc genhtml_legend=1 00:12:03.036 --rc geninfo_all_blocks=1 00:12:03.036 --rc geninfo_unexecuted_blocks=1 00:12:03.036 00:12:03.036 ' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.036 --rc genhtml_branch_coverage=1 00:12:03.036 --rc genhtml_function_coverage=1 00:12:03.036 --rc genhtml_legend=1 00:12:03.036 --rc geninfo_all_blocks=1 00:12:03.036 --rc geninfo_unexecuted_blocks=1 00:12:03.036 00:12:03.036 ' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.036 --rc genhtml_branch_coverage=1 00:12:03.036 --rc genhtml_function_coverage=1 00:12:03.036 --rc genhtml_legend=1 00:12:03.036 --rc geninfo_all_blocks=1 00:12:03.036 --rc geninfo_unexecuted_blocks=1 00:12:03.036 00:12:03.036 ' 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.036 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.037 10:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:11.183 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:11.183 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:11.183 Found net devices under 0000:31:00.0: cvl_0_0 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.183 10:04:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:11.183 Found net devices under 0000:31:00.1: cvl_0_1 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.183 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:12:11.184 00:12:11.184 --- 10.0.0.2 ping statistics --- 00:12:11.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.184 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:12:11.184 00:12:11.184 --- 10.0.0.1 ping statistics --- 00:12:11.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.184 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:11.184 only one NIC for nvmf test 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
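[Editor's note] One caveat on the nvmf_target_multipath result: this rig exposes only a single target/initiator port pair, so NVMF_SECOND_TARGET_IP is left empty and multipath.sh takes its early-exit path ("only one NIC for nvmf test"). The clean exit here therefore reflects a skipped test, not an exercised multipath topology. The guard it hits is essentially the following (variable name inferred from the empty NVMF_SECOND_TARGET_IP assignment earlier in the trace):

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi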
00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.184 rmmod nvme_tcp 00:12:11.184 rmmod nvme_fabrics 00:12:11.184 rmmod nvme_keyring 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.184 10:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.731 00:12:13.731 real 0m10.709s 00:12:13.731 user 0m2.305s 00:12:13.731 sys 0m6.318s 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:13.731 ************************************ 00:12:13.731 END TEST nvmf_target_multipath 00:12:13.731 ************************************ 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.731 ************************************ 00:12:13.731 START TEST nvmf_zcopy 00:12:13.731 ************************************ 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:13.731 * Looking for test storage... 
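The multipath test exits 0 after its own nvmftestfini call, and the EXIT trap (target/multipath.sh@1) runs nvmftestfini once more, which is why the cleanup sequence appears twice in the trace above. Condensed, the teardown mirrors the setup; the body of _remove_spdk_ns is not traced here, so the namespace deletion noted below is an assumption about what it does:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Drop only the rules this test added, keyed on the SPDK_NVMF comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  _remove_spdk_ns               # assumed to delete the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1

From here the harness moves on to run_test nvmf_zcopy, which re-sources nvmf/common.sh and rebuilds the same environment from scratch.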
00:12:13.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.731 10:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.731 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.732 10:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:21.873 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:21.873 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:21.873 Found net devices under 0000:31:00.0: cvl_0_0 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:21.873 Found net devices under 0000:31:00.1: cvl_0_1 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.873 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.874 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:12:22.135 00:12:22.135 --- 10.0.0.2 ping statistics --- 00:12:22.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.135 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:12:22.135 00:12:22.135 --- 10.0.0.1 ping statistics --- 00:12:22.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.135 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3727245 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3727245 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3727245 ']' 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:22.135 10:04:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.135 [2024-11-06 10:04:25.585332] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
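With the namespace and addresses rebuilt for the zcopy test, nvmfappstart launches the target inside the target namespace and waits for its RPC socket before any configuration is sent. A sketch using the command line exactly as traced; the polling loop is a simplified stand-in for waitforlisten, whose body is not shown in this log:

  # Start nvmf_tgt inside the target namespace (PID 3727245 in this run).
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Simplified stand-in for waitforlisten: block until the app listens on
  # /var/tmp/spdk.sock, then configuration RPCs can be issued against it.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done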
00:12:22.135 [2024-11-06 10:04:25.585374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.397 [2024-11-06 10:04:25.675971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.397 [2024-11-06 10:04:25.713407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.397 [2024-11-06 10:04:25.713443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.397 [2024-11-06 10:04:25.713451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.397 [2024-11-06 10:04:25.713458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.397 [2024-11-06 10:04:25.713465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.397 [2024-11-06 10:04:25.714152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.967 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.968 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 [2024-11-06 10:04:26.469355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 [2024-11-06 10:04:26.493648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 malloc0 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:23.228 { 00:12:23.228 "params": { 00:12:23.228 "name": "Nvme$subsystem", 00:12:23.228 "trtype": "$TEST_TRANSPORT", 00:12:23.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.228 "adrfam": "ipv4", 00:12:23.228 "trsvcid": "$NVMF_PORT", 00:12:23.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.228 "hdgst": ${hdgst:-false}, 00:12:23.228 "ddgst": ${ddgst:-false} 00:12:23.228 }, 00:12:23.228 "method": "bdev_nvme_attach_controller" 00:12:23.228 } 00:12:23.228 EOF 00:12:23.228 )") 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
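Once the target is up, zcopy.sh drives the whole configuration over RPC before starting I/O. The subcommands and flags below are copied from the trace; writing them as scripts/rpc.py calls against the default /var/tmp/spdk.sock is an assumption about how rpc_cmd is wired in this harness:

  # TCP transport with zero-copy enabled (flags exactly as traced: -t tcp -o -c 0 --zcopy).
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem allowing any host, serial SPDK00000000000001, at most 10 namespaces.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data listener on the target-namespace IP, plus the discovery subsystem listener.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with 4096-byte blocks, attached as namespace 1.
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then emits the bdev_nvme_attach_controller configuration printed just below, and bdevperf consumes it over /dev/fd/62 for a 10-second verify run at queue depth 128 with 8 KiB I/O.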
00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:23.228 10:04:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:23.228 "params": { 00:12:23.228 "name": "Nvme1", 00:12:23.228 "trtype": "tcp", 00:12:23.228 "traddr": "10.0.0.2", 00:12:23.228 "adrfam": "ipv4", 00:12:23.228 "trsvcid": "4420", 00:12:23.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.228 "hdgst": false, 00:12:23.228 "ddgst": false 00:12:23.228 }, 00:12:23.228 "method": "bdev_nvme_attach_controller" 00:12:23.228 }' 00:12:23.228 [2024-11-06 10:04:26.596417] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:23.228 [2024-11-06 10:04:26.596483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727564 ] 00:12:23.228 [2024-11-06 10:04:26.679080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.228 [2024-11-06 10:04:26.720462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.797 Running I/O for 10 seconds... 00:12:25.680 6680.00 IOPS, 52.19 MiB/s [2024-11-06T09:04:30.123Z] 7946.00 IOPS, 62.08 MiB/s [2024-11-06T09:04:31.067Z] 8542.00 IOPS, 66.73 MiB/s [2024-11-06T09:04:32.449Z] 8843.00 IOPS, 69.09 MiB/s [2024-11-06T09:04:33.020Z] 9027.60 IOPS, 70.53 MiB/s [2024-11-06T09:04:34.402Z] 9151.33 IOPS, 71.49 MiB/s [2024-11-06T09:04:35.343Z] 9239.86 IOPS, 72.19 MiB/s [2024-11-06T09:04:36.352Z] 9307.12 IOPS, 72.71 MiB/s [2024-11-06T09:04:37.333Z] 9358.78 IOPS, 73.12 MiB/s [2024-11-06T09:04:37.333Z] 9400.40 IOPS, 73.44 MiB/s 00:12:33.832 Latency(us) 00:12:33.832 [2024-11-06T09:04:37.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.832 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:33.832 Verification LBA range: start 0x0 length 0x1000 00:12:33.832 Nvme1n1 : 10.01 9401.43 73.45 0.00 0.00 13563.78 1358.51 25668.27 00:12:33.832 [2024-11-06T09:04:37.333Z] =================================================================================================================== 00:12:33.832 [2024-11-06T09:04:37.333Z] Total : 9401.43 73.45 0.00 0.00 13563.78 1358.51 25668.27 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3729629 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:33.832 { 00:12:33.832 "params": { 00:12:33.832 "name": 
"Nvme$subsystem", 00:12:33.832 "trtype": "$TEST_TRANSPORT", 00:12:33.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.832 "adrfam": "ipv4", 00:12:33.832 "trsvcid": "$NVMF_PORT", 00:12:33.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.832 "hdgst": ${hdgst:-false}, 00:12:33.832 "ddgst": ${ddgst:-false} 00:12:33.832 }, 00:12:33.832 "method": "bdev_nvme_attach_controller" 00:12:33.832 } 00:12:33.832 EOF 00:12:33.832 )") 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:33.832 [2024-11-06 10:04:37.149271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.149299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:33.832 10:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:33.832 "params": { 00:12:33.832 "name": "Nvme1", 00:12:33.832 "trtype": "tcp", 00:12:33.832 "traddr": "10.0.0.2", 00:12:33.832 "adrfam": "ipv4", 00:12:33.832 "trsvcid": "4420", 00:12:33.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.832 "hdgst": false, 00:12:33.832 "ddgst": false 00:12:33.832 }, 00:12:33.832 "method": "bdev_nvme_attach_controller" 00:12:33.832 }' 00:12:33.832 [2024-11-06 10:04:37.161274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.161284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.173302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.173309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.185333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.185341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.197365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.197372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.204487] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:33.832 [2024-11-06 10:04:37.204537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729629 ] 00:12:33.832 [2024-11-06 10:04:37.209395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.209405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.221425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.221433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.233457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.233465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.245488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.245496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.257518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.257526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.269548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.269556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.281348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.832 [2024-11-06 10:04:37.281580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.281587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.293610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.293619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.305642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.305651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.316788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.832 [2024-11-06 10:04:37.317673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.317682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.832 [2024-11-06 10:04:37.329709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.832 [2024-11-06 10:04:37.329718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.092 [2024-11-06 10:04:37.341740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.092 [2024-11-06 10:04:37.341753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.353768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:34.093 [2024-11-06 10:04:37.353779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.365798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.365808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.377828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.377836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.389873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.389892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.401895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.401905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.413925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.413936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.425955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.425965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.437985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.437993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.450017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.450025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.462047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.462054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.474089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.474099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.486111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.486119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.498142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.498150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.510171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.510179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.522204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.522214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 
10:04:37.534233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.534242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.546264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.546272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.558296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.558304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 [2024-11-06 10:04:37.570345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.570361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.093 Running I/O for 5 seconds... 00:12:34.093 [2024-11-06 10:04:37.582371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.093 [2024-11-06 10:04:37.582380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.597147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.597165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.610406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.610422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.624117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.624133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.637754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.637770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.650613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.650628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.662834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.662849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.675971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.675987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.689485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.689501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.703302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.703317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.716359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:34.353 [2024-11-06 10:04:37.716375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.728784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.728799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.742032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.742047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.755555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.755570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.768720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.768735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.781110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.781125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.794688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.794703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.807606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.807626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.820844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.820861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.833626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.833643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.353 [2024-11-06 10:04:37.846786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.353 [2024-11-06 10:04:37.846803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.860347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.860363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.872921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.872937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.885368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.885384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.897941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.897957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.910841] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.910856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.924294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.924310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.937666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.937682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.950394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.950410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.963138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.963153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.975384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.975399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:37.988334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:37.988349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:38.001836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:38.001851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:38.015153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:38.015169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:38.028223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.613 [2024-11-06 10:04:38.028238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.613 [2024-11-06 10:04:38.041100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.041116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.614 [2024-11-06 10:04:38.054530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.054550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.614 [2024-11-06 10:04:38.067049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.067064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.614 [2024-11-06 10:04:38.080421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.080437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.614 [2024-11-06 10:04:38.092761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.092776] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.614 [2024-11-06 10:04:38.105807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.614 [2024-11-06 10:04:38.105824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.118388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.118404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.131912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.131928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.145107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.145123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.158104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.158120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.171535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.171550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.184884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.184900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.197404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.197420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.211045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.211060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.224363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.224378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.238053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.238068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.250439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.250454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.263345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.874 [2024-11-06 10:04:38.263361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.874 [2024-11-06 10:04:38.276475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.276491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.289348] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.289363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.302205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.302225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.314605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.314620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.327099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.327114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.340504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.340519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.354049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.354064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.875 [2024-11-06 10:04:38.366337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.875 [2024-11-06 10:04:38.366352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.378893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.378909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.392057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.392073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.404668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.404684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.417308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.417324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.430565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.430581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.443751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.443765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.457336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.457351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.469945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.469960] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.483665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.483724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.496983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.496998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.509631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.509646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.522949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.522964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.536454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.536470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.549812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.549835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.562405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.562420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 [2024-11-06 10:04:38.574991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.136 [2024-11-06 10:04:38.575006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.136 19133.00 IOPS, 149.48 MiB/s [2024-11-06T09:04:38.638Z] [2024-11-06 10:04:38.587915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.137 [2024-11-06 10:04:38.587930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.137 [2024-11-06 10:04:38.600760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.137 [2024-11-06 10:04:38.600775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.137 [2024-11-06 10:04:38.613601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.137 [2024-11-06 10:04:38.613616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.137 [2024-11-06 10:04:38.626657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.137 [2024-11-06 10:04:38.626672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.639419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.639435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.652142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.652157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 
10:04:38.665260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.665275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.678632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.678647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.691914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.691929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.705265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.705280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.718440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.718455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.731713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.731728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.744418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.744433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.757813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.757827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.771394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.771409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.784707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.784722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.798089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.798104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.811229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.811245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.824637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.824652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.837469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.837485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.850527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.850542] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.864018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.864032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.877153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.877168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.397 [2024-11-06 10:04:38.889601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.397 [2024-11-06 10:04:38.889616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.902365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.902381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.914830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.914845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.927239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.927254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.940069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.940084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.953200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.953215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.966239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.966254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.979146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.979160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:38.992452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:38.992467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.004934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.004948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.018576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.018591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.032038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.032053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.045174] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.045188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.058489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.058504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.071928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.071943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.084540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.084555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.097469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.097484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.110084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.110099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.123081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.123096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.136636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.136651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.658 [2024-11-06 10:04:39.149181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.658 [2024-11-06 10:04:39.149196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.162647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.162662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.175870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.175886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.189488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.189503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.202093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.202108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.215032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.215047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.228083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.228097] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.240368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.240383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.253175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.253190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.266510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.266524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.279319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.279334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.292517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.292532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.306130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.306144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.319133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.319148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.331826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.331841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.344692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.344707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.357915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.357930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.371492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.371507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.384298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.384313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.397621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.397636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.919 [2024-11-06 10:04:39.410799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.919 [2024-11-06 10:04:39.410814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.423280] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.423295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.435847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.435866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.449146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.449161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.462650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.462665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.475555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.475570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.488360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.488375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.501077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.501093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.514903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.514919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.527734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.527754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.541324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.541340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.554660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.554675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.567737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.567753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.580456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.580471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 19245.00 IOPS, 150.35 MiB/s [2024-11-06T09:04:39.681Z] [2024-11-06 10:04:39.593276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.593292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.606206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:36.180 [2024-11-06 10:04:39.606223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.619505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.619521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.632886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.632901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.645942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.645958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.658367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.658382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.180 [2024-11-06 10:04:39.671945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.180 [2024-11-06 10:04:39.671961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.684637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.684653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.697590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.697606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.710921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.710937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.723938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.723953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.737366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.737382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.749874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.749890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.762578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.762594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.775840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.775859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.788672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.788687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.802076] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.802091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.815509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.815525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.829107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.829122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.842362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.842377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.856054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.856069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.869661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.869676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.882849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.882869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.895686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.895701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.909011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.909026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.922376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.922392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.935802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.935817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.948064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.948079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.468 [2024-11-06 10:04:39.960664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.468 [2024-11-06 10:04:39.960679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:39.973264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:39.973280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:39.986499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:39.986514] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:39.999598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:39.999613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:40.012664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:40.012681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:40.025224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:40.025244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:40.039030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.729 [2024-11-06 10:04:40.039047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.729 [2024-11-06 10:04:40.052257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.052273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.065589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.065604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.078715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.078731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.091751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.091766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.105098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.105113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.117706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.117722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.130327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.130343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.143744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.143759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.156513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.156529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.169233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.169250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.177164] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.177179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.186010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.186024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.194682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.194697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.207450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.207465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.730 [2024-11-06 10:04:40.221027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.730 [2024-11-06 10:04:40.221042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.233811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.233827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.247028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.247043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.259501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.259516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.272915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.272930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.286450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.286466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.299644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.299660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.312482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.312497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.326139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.326154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.339425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.339439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.989 [2024-11-06 10:04:40.352477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.989 [2024-11-06 10:04:40.352492] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.365316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.365331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.377887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.377902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.391029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.391044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.404509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.404524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.417987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.418002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.431046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.431062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.444329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.444344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.457821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.457836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.471360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.471375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.990 [2024-11-06 10:04:40.484382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.990 [2024-11-06 10:04:40.484397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.497956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.497971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.511262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.511278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.523886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.523901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.536745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.536759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.550290] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.550307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.563524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.563539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.575909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.575924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.589212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.589227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 19270.67 IOPS, 150.55 MiB/s [2024-11-06T09:04:40.750Z] [2024-11-06 10:04:40.602569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.602584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.615816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.615832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.628608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.628623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.641007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.641022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.654097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.654112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.666722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.249 [2024-11-06 10:04:40.666737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.249 [2024-11-06 10:04:40.680151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.250 [2024-11-06 10:04:40.680166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.250 [2024-11-06 10:04:40.692616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.250 [2024-11-06 10:04:40.692631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.250 [2024-11-06 10:04:40.705167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.250 [2024-11-06 10:04:40.705182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.250 [2024-11-06 10:04:40.717385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.250 [2024-11-06 10:04:40.717401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.250 [2024-11-06 10:04:40.730696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:37.250 [2024-11-06 10:04:40.730711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.250 [2024-11-06 10:04:40.744064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:37.250 [2024-11-06 10:04:40.744082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:37.250 .. 00:12:38.295 [this pair of messages repeats at roughly 13 ms intervals up to 2024-11-06 10:04:41.578953]
00:12:38.295 19277.50 IOPS, 150.61 MiB/s [2024-11-06T09:04:41.796Z]
00:12:38.295 .. 00:12:39.337 [the same pair keeps repeating at roughly 13 ms intervals up to 2024-11-06 10:04:42.582804]
00:12:39.337 19282.80 IOPS, 150.65 MiB/s [2024-11-06T09:04:42.838Z]
00:12:39.337 [2024-11-06 10:04:42.594868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:39.337 [2024-11-06 10:04:42.594884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:39.337 Latency(us)
00:12:39.337 [2024-11-06T09:04:42.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:39.337 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:39.337 Nvme1n1 : 5.01 19286.12 150.67 0.00 0.00 6630.45 2921.81 15619.41
00:12:39.337 [2024-11-06T09:04:42.838Z] ===================================================================================================================
00:12:39.337 [2024-11-06T09:04:42.838Z] Total : 19286.12 150.67 0.00 0.00 6630.45 2921.81 15619.41
00:12:39.337 [2024-11-06 10:04:42.604804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:39.337 [2024-11-06 10:04:42.604819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:39.337 [the pair repeats at roughly 12 ms intervals up to 2024-11-06 10:04:42.713085]
00:12:39.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3729629) - No such process
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3729629
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
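The rpc_cmd lines traced immediately above and just below are thin wrappers that forward their arguments to SPDK's JSON-RPC methods of the same names. A minimal sketch of the same namespace/delay-bdev sequence issued by hand through scripts/rpc.py might look like the following; the NQN, bdev names and latency arguments are copied from the trace, while the rpc.py invocation and its default RPC socket are assumptions made only for illustration.

    # Illustrative sketch, not part of this job's output.
    RPC=./scripts/rpc.py                 # assumes a running nvmf_tgt on the default /var/tmp/spdk.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop namespace 1 once the background add-namespace churn has ended.
    $RPC nvmf_subsystem_remove_ns "$NQN" 1

    # Wrap malloc0 in a delay bdev, using the latency arguments seen in the trace.
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-expose the delayed bdev as NSID 1 so the abort run below has a slow namespace to target.
    $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1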
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 delay0 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.337 10:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:39.337 [2024-11-06 10:04:42.822846] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:47.473 Initializing NVMe Controllers 00:12:47.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:47.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:47.474 Initialization complete. Launching workers. 00:12:47.474 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 29998 00:12:47.474 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30109, failed to submit 130 00:12:47.474 success 30035, unsuccessful 74, failed 0 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.474 rmmod nvme_tcp 00:12:47.474 rmmod nvme_fabrics 00:12:47.474 rmmod nvme_keyring 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3727245 ']' 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3727245 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3727245 ']' 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3727245 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@957 -- # uname 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:47.474 10:04:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3727245 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3727245' 00:12:47.474 killing process with pid 3727245 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3727245 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3727245 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.474 10:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.858 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.858 00:12:48.858 real 0m35.398s 00:12:48.858 user 0m45.805s 00:12:48.858 sys 0m12.264s 00:12:48.858 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.858 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:48.858 ************************************ 00:12:48.858 END TEST nvmf_zcopy 00:12:48.858 ************************************ 00:12:48.858 10:04:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:48.859 10:04:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:48.859 10:04:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.859 10:04:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 ************************************ 00:12:48.859 START TEST nvmf_nmic 00:12:48.859 ************************************ 00:12:48.859 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:49.120 * Looking for test storage... 00:12:49.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.120 --rc genhtml_branch_coverage=1 00:12:49.120 --rc genhtml_function_coverage=1 00:12:49.120 --rc genhtml_legend=1 00:12:49.120 --rc geninfo_all_blocks=1 00:12:49.120 --rc geninfo_unexecuted_blocks=1 00:12:49.120 00:12:49.120 ' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.120 --rc genhtml_branch_coverage=1 00:12:49.120 --rc genhtml_function_coverage=1 00:12:49.120 --rc genhtml_legend=1 00:12:49.120 --rc geninfo_all_blocks=1 00:12:49.120 --rc geninfo_unexecuted_blocks=1 00:12:49.120 00:12:49.120 ' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.120 --rc genhtml_branch_coverage=1 00:12:49.120 --rc genhtml_function_coverage=1 00:12:49.120 --rc genhtml_legend=1 00:12:49.120 --rc geninfo_all_blocks=1 00:12:49.120 --rc geninfo_unexecuted_blocks=1 00:12:49.120 00:12:49.120 ' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.120 --rc genhtml_branch_coverage=1 00:12:49.120 --rc genhtml_function_coverage=1 00:12:49.120 --rc genhtml_legend=1 00:12:49.120 --rc geninfo_all_blocks=1 00:12:49.120 --rc geninfo_unexecuted_blocks=1 00:12:49.120 00:12:49.120 ' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
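The cmp_versions trace above is scripts/common.sh deciding, field by field, that the installed lcov (1.15 here) is older than 2, so the --rc lcov_branch_coverage=1 style options seen in the trace get applied. As a rough, simplified illustration of that kind of dotted-version comparison (not SPDK's actual helper), a standalone bash check could look like this:

    # Simplified sketch of a "version A < version B" test for plain dotted numeric versions.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            ((x < y)) && return 0   # earlier field already smaller, so A < B
            ((x > y)) && return 1   # earlier field larger, so A >= B
        done
        return 1                    # equal versions are not "less than"
    }

    # 1.15 < 2, matching the branch taken in the trace above.
    version_lt 1.15 2 && echo "lcov older than 2"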
00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.120 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:49.121 
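The nvmftestinit call at the end of the line above leads into nvmf/common.sh's NIC discovery, traced below: it builds arrays of known vendor:device IDs (e810, x722, mlx) and then walks the PCI bus looking for matches, which is how the two 0x8086:0x159b ports at 0000:31:00.0 and 0000:31:00.1 are found further down. A rough manual equivalent of that probe, shown purely as an illustration, is a plain lspci filter on the same IDs:

    # Illustrative sketch: list PCI functions matching the Intel E810 device IDs
    # that the e810 array in nvmf/common.sh is populated with (0x1592 and 0x159b).
    for dev_id in 1592 159b; do
        lspci -D -d 8086:${dev_id}   # -D prints the full domain:bus:dev.fn address
    done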
10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.121 10:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.262 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:57.262 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:57.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.263 10:05:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:57.263 Found net devices under 0000:31:00.0: cvl_0_0 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:57.263 Found net devices under 0000:31:00.1: cvl_0_1 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.263 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.524 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.524 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.524 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.524 10:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.524 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.524 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.524 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.524 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:12:57.785 00:12:57.785 --- 10.0.0.2 ping statistics --- 00:12:57.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.785 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:57.785 00:12:57.785 --- 10.0.0.1 ping statistics --- 00:12:57.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.785 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3736994 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3736994 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3736994 ']' 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.785 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:57.786 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.786 [2024-11-06 10:05:01.116508] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
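Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) becomes the target interface inside a private namespace at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 before reachability is checked in both directions. A sketch of the same bring-up reduced to its essential commands (device names and addresses taken from the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # tagged with an SPDK_NVMF comment in the real run
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator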
00:12:57.786 [2024-11-06 10:05:01.116564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.786 [2024-11-06 10:05:01.194329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.786 [2024-11-06 10:05:01.233098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.786 [2024-11-06 10:05:01.233136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.786 [2024-11-06 10:05:01.233144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.786 [2024-11-06 10:05:01.233152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.786 [2024-11-06 10:05:01.233158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.786 [2024-11-06 10:05:01.234893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.786 [2024-11-06 10:05:01.234965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.786 [2024-11-06 10:05:01.235275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.786 [2024-11-06 10:05:01.235276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 [2024-11-06 10:05:01.371507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 Malloc0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 [2024-11-06 10:05:01.440283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:58.047 test case1: single bdev can't be used in multiple subsystems 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 [2024-11-06 10:05:01.476192] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:58.047 [2024-11-06 10:05:01.476213] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:58.047 [2024-11-06 10:05:01.476221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.047 request: 00:12:58.047 { 00:12:58.047 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:58.047 "namespace": { 00:12:58.047 "bdev_name": "Malloc0", 00:12:58.047 "no_auto_visible": false 
00:12:58.047 }, 00:12:58.047 "method": "nvmf_subsystem_add_ns", 00:12:58.047 "req_id": 1 00:12:58.047 } 00:12:58.047 Got JSON-RPC error response 00:12:58.047 response: 00:12:58.047 { 00:12:58.047 "code": -32602, 00:12:58.047 "message": "Invalid parameters" 00:12:58.047 } 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:58.047 Adding namespace failed - expected result. 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:58.047 test case2: host connect to nvmf target in multiple paths 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.047 [2024-11-06 10:05:01.488363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.047 10:05:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.959 10:05:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:01.345 10:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.345 10:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:13:01.345 10:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.345 10:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:01.345 10:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:13:03.285 10:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:03.285 [global] 00:13:03.285 thread=1 00:13:03.285 invalidate=1 00:13:03.285 rw=write 00:13:03.285 time_based=1 00:13:03.285 runtime=1 00:13:03.285 ioengine=libaio 00:13:03.285 direct=1 00:13:03.285 bs=4096 00:13:03.285 iodepth=1 00:13:03.285 norandommap=0 00:13:03.285 numjobs=1 00:13:03.285 00:13:03.285 verify_dump=1 00:13:03.285 verify_backlog=512 00:13:03.285 verify_state_save=0 00:13:03.285 do_verify=1 00:13:03.285 verify=crc32c-intel 00:13:03.285 [job0] 00:13:03.285 filename=/dev/nvme0n1 00:13:03.285 Could not set queue depth (nvme0n1) 00:13:03.549 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.549 fio-3.35 00:13:03.549 Starting 1 thread 00:13:04.930 00:13:04.930 job0: (groupid=0, jobs=1): err= 0: pid=3738233: Wed Nov 6 10:05:08 2024 00:13:04.930 read: IOPS=565, BW=2262KiB/s (2316kB/s)(2264KiB/1001msec) 00:13:04.930 slat (nsec): min=6626, max=59719, avg=22694.25, stdev=6995.97 00:13:04.930 clat (usec): min=360, max=924, avg=732.68, stdev=92.42 00:13:04.930 lat (usec): min=367, max=949, avg=755.38, stdev=94.55 00:13:04.930 clat percentiles (usec): 00:13:04.930 | 1.00th=[ 474], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 660], 00:13:04.930 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 783], 00:13:04.930 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 848], 00:13:04.930 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:13:04.930 | 99.99th=[ 922] 00:13:04.930 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:04.930 slat (nsec): min=9432, max=65739, avg=29923.46, stdev=8703.90 00:13:04.930 clat (usec): min=163, max=753, avg=517.88, stdev=105.77 00:13:04.930 lat (usec): min=174, max=803, avg=547.80, stdev=109.97 00:13:04.930 clat percentiles (usec): 00:13:04.930 | 1.00th=[ 243], 5.00th=[ 322], 10.00th=[ 379], 20.00th=[ 424], 00:13:04.930 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 523], 60.00th=[ 570], 00:13:04.930 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 660], 00:13:04.930 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 758], 00:13:04.930 | 99.99th=[ 758] 00:13:04.930 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:04.930 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:04.930 lat (usec) : 250=0.82%, 500=26.29%, 750=54.53%, 1000=18.36% 00:13:04.930 cpu : usr=2.70%, sys=4.10%, ctx=1591, majf=0, minf=1 00:13:04.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.930 issued rwts: total=566,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.930 00:13:04.930 Run status group 0 (all jobs): 00:13:04.930 READ: bw=2262KiB/s (2316kB/s), 2262KiB/s-2262KiB/s (2316kB/s-2316kB/s), io=2264KiB (2318kB), run=1001-1001msec 00:13:04.930 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:13:04.930 00:13:04.930 Disk stats (read/write): 00:13:04.930 nvme0n1: ios=562/914, merge=0/0, ticks=611/426, in_queue=1037, util=96.79% 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.930 rmmod nvme_tcp 00:13:04.930 rmmod nvme_fabrics 00:13:04.930 rmmod nvme_keyring 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3736994 ']' 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3736994 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3736994 ']' 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3736994 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3736994 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3736994' 00:13:04.930 killing process with pid 3736994 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3736994 00:13:04.930 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 
-- # wait 3736994 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.190 10:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.101 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.101 00:13:07.101 real 0m18.302s 00:13:07.101 user 0m49.100s 00:13:07.101 sys 0m7.288s 00:13:07.101 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.101 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.101 ************************************ 00:13:07.101 END TEST nvmf_nmic 00:13:07.101 ************************************ 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:07.362 ************************************ 00:13:07.362 START TEST nvmf_fio_target 00:13:07.362 ************************************ 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:07.362 * Looking for test storage... 
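The nvmf_nmic teardown traced just above (nvmftestfini) boils down to the following steps. This is a condensed sketch of what the trace shows, not the script itself; the final netns removal is the assumed effect of _remove_spdk_ns:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # drops both connected paths
    modprobe -v -r nvme-tcp                                    # also unloads nvme_fabrics and nvme_keyring
    kill "$nvmfpid"                                            # stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore       # remove only the rules this test tagged
    ip netns delete cvl_0_0_ns_spdk                            # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1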
00:13:07.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.362 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.624 --rc genhtml_branch_coverage=1 00:13:07.624 --rc genhtml_function_coverage=1 00:13:07.624 --rc genhtml_legend=1 00:13:07.624 --rc geninfo_all_blocks=1 00:13:07.624 --rc geninfo_unexecuted_blocks=1 00:13:07.624 00:13:07.624 ' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.624 --rc genhtml_branch_coverage=1 00:13:07.624 --rc genhtml_function_coverage=1 00:13:07.624 --rc genhtml_legend=1 00:13:07.624 --rc geninfo_all_blocks=1 00:13:07.624 --rc geninfo_unexecuted_blocks=1 00:13:07.624 00:13:07.624 ' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.624 --rc genhtml_branch_coverage=1 00:13:07.624 --rc genhtml_function_coverage=1 00:13:07.624 --rc genhtml_legend=1 00:13:07.624 --rc geninfo_all_blocks=1 00:13:07.624 --rc geninfo_unexecuted_blocks=1 00:13:07.624 00:13:07.624 ' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.624 --rc genhtml_branch_coverage=1 00:13:07.624 --rc genhtml_function_coverage=1 00:13:07.624 --rc genhtml_legend=1 00:13:07.624 --rc geninfo_all_blocks=1 00:13:07.624 --rc geninfo_unexecuted_blocks=1 00:13:07.624 00:13:07.624 ' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.624 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.625 10:05:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.625 10:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.761 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.761 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.761 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.761 10:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:15.761 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.762 10:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:15.762 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:15.762 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.762 10:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:15.762 Found net devices under 0000:31:00.0: cvl_0_0 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:15.762 Found net devices under 0000:31:00.1: cvl_0_1 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.762 10:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.762 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.023 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:13:16.024 00:13:16.024 --- 10.0.0.2 ping statistics --- 00:13:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.024 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:13:16.024 00:13:16.024 --- 10.0.0.1 ping statistics --- 00:13:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.024 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3743435 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3743435 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3743435 ']' 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.024 10:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.024 [2024-11-06 10:05:19.473403] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
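
The nvmf_tcp_init sequence above boils down to a short series of iproute2/iptables commands: flush both ports, move the target-side port into a private network namespace, give each side an address on 10.0.0.0/24, open TCP port 4420, and ping in both directions to confirm the path before the target is started. A consolidated sketch of those steps, assuming the interface names and addresses this particular run picked:

  # Recap of the TCP test-bed setup traced above (names/addresses are from this run).
  TGT_IF=cvl_0_0          # target-side port, moved into a namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

With the path verified, the nvmf_tgt application is then launched inside the namespace (the ip netns exec prefix is folded into NVMF_APP), which is what the startup notices that follow correspond to.
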
00:13:16.024 [2024-11-06 10:05:19.473466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.285 [2024-11-06 10:05:19.567709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.285 [2024-11-06 10:05:19.609800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.285 [2024-11-06 10:05:19.609840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.285 [2024-11-06 10:05:19.609848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.285 [2024-11-06 10:05:19.609855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.285 [2024-11-06 10:05:19.609866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.285 [2024-11-06 10:05:19.611748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.285 [2024-11-06 10:05:19.611873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.285 [2024-11-06 10:05:19.611909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.285 [2024-11-06 10:05:19.611928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.855 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.114 [2024-11-06 10:05:20.475656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.114 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.375 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:17.375 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.635 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:17.635 10:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.635 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:17.635 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.894 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:17.894 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:18.155 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.414 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:18.414 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.414 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:18.414 10:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.673 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:18.673 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:18.934 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.196 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.196 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.196 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.196 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.457 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.717 [2024-11-06 10:05:22.968406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.717 10:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:19.717 10:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:19.977 10:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.381 10:05:24 
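
By this point fio.sh has built the whole target configuration over rpc.py: a TCP transport, two plain malloc bdevs, a two-member raid0 and a three-member concat volume, one subsystem carrying all of them as namespaces, and a listener on 10.0.0.2:4420; the host then connects with nvme-cli and the waitforserial step that follows polls lsblk until all four namespaces appear. Condensed into plain commands (paths shortened, serial/NQN/host UUID taken from this run), the sequence looks roughly like:

  # Rough recap of the target setup driven through scripts/rpc.py above;
  # $rpc stands for the full path to scripts/rpc.py used by the job.
  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                      # Malloc0
  $rpc bdev_malloc_create 64 512                      # Malloc1
  $rpc bdev_malloc_create 64 512                      # Malloc2
  $rpc bdev_malloc_create 64 512                      # Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                      # Malloc4
  $rpc bdev_malloc_create 64 512                      # Malloc5
  $rpc bdev_malloc_create 64 512                      # Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Host side: connect and check that four namespaces show up as nvme0n1..n4.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
       --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 4

The fio runs that follow are then driven through scripts/fio-wrapper, which generates one job per namespace (/dev/nvme0n1 through /dev/nvme0n4) with the block size, queue depth and workload passed on its command line.
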
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:21.381 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:13:21.381 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.381 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:13:21.381 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:13:21.381 10:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:13:23.926 10:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:23.926 [global] 00:13:23.926 thread=1 00:13:23.926 invalidate=1 00:13:23.926 rw=write 00:13:23.926 time_based=1 00:13:23.926 runtime=1 00:13:23.926 ioengine=libaio 00:13:23.926 direct=1 00:13:23.926 bs=4096 00:13:23.926 iodepth=1 00:13:23.926 norandommap=0 00:13:23.926 numjobs=1 00:13:23.926 00:13:23.926 verify_dump=1 00:13:23.926 verify_backlog=512 00:13:23.926 verify_state_save=0 00:13:23.926 do_verify=1 00:13:23.926 verify=crc32c-intel 00:13:23.926 [job0] 00:13:23.926 filename=/dev/nvme0n1 00:13:23.926 [job1] 00:13:23.926 filename=/dev/nvme0n2 00:13:23.926 [job2] 00:13:23.926 filename=/dev/nvme0n3 00:13:23.926 [job3] 00:13:23.926 filename=/dev/nvme0n4 00:13:23.926 Could not set queue depth (nvme0n1) 00:13:23.926 Could not set queue depth (nvme0n2) 00:13:23.926 Could not set queue depth (nvme0n3) 00:13:23.926 Could not set queue depth (nvme0n4) 00:13:23.926 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.926 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.926 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.926 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.927 fio-3.35 00:13:23.927 Starting 4 threads 00:13:25.312 00:13:25.312 job0: (groupid=0, jobs=1): err= 0: pid=3745167: Wed Nov 6 10:05:28 2024 00:13:25.312 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:25.312 slat (nsec): min=26257, max=46363, avg=27336.12, stdev=2821.45 00:13:25.312 clat (usec): min=630, max=1300, avg=1041.92, stdev=80.90 00:13:25.312 lat (usec): min=657, max=1326, avg=1069.25, stdev=80.99 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 988], 
00:13:25.312 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1057], 00:13:25.312 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:13:25.312 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:13:25.312 | 99.99th=[ 1303] 00:13:25.312 write: IOPS=681, BW=2725KiB/s (2791kB/s)(2728KiB/1001msec); 0 zone resets 00:13:25.312 slat (nsec): min=9125, max=69594, avg=29255.80, stdev=10322.44 00:13:25.312 clat (usec): min=201, max=1028, avg=621.51, stdev=125.81 00:13:25.312 lat (usec): min=230, max=1063, avg=650.77, stdev=129.87 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 281], 5.00th=[ 404], 10.00th=[ 461], 20.00th=[ 519], 00:13:25.312 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 668], 00:13:25.312 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:13:25.312 | 99.00th=[ 889], 99.50th=[ 947], 99.90th=[ 1029], 99.95th=[ 1029], 00:13:25.312 | 99.99th=[ 1029] 00:13:25.312 bw ( KiB/s): min= 4096, max= 4096, per=34.26%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.312 lat (usec) : 250=0.42%, 500=9.72%, 750=39.11%, 1000=17.92% 00:13:25.312 lat (msec) : 2=32.83% 00:13:25.312 cpu : usr=2.80%, sys=4.10%, ctx=1194, majf=0, minf=1 00:13:25.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 issued rwts: total=512,682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.312 job1: (groupid=0, jobs=1): err= 0: pid=3745169: Wed Nov 6 10:05:28 2024 00:13:25.312 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:25.312 slat (nsec): min=24454, max=43953, avg=25607.19, stdev=2483.81 00:13:25.312 clat (usec): min=808, max=1378, avg=1107.32, stdev=101.43 00:13:25.312 lat (usec): min=833, max=1403, avg=1132.93, stdev=101.19 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 840], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1029], 00:13:25.312 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:13:25.312 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1270], 00:13:25.312 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1385], 99.95th=[ 1385], 00:13:25.312 | 99.99th=[ 1385] 00:13:25.312 write: IOPS=646, BW=2585KiB/s (2647kB/s)(2588KiB/1001msec); 0 zone resets 00:13:25.312 slat (nsec): min=9611, max=51818, avg=30342.97, stdev=7614.71 00:13:25.312 clat (usec): min=234, max=878, avg=604.85, stdev=119.67 00:13:25.312 lat (usec): min=245, max=928, avg=635.19, stdev=122.32 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 318], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 494], 00:13:25.312 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:13:25.312 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 783], 00:13:25.312 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 881], 99.95th=[ 881], 00:13:25.312 | 99.99th=[ 881] 00:13:25.312 bw ( KiB/s): min= 4096, max= 4096, per=34.26%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.312 lat (usec) : 250=0.09%, 500=11.99%, 750=37.70%, 1000=12.08% 00:13:25.312 lat (msec) : 2=38.14% 00:13:25.312 cpu : usr=2.30%, sys=2.90%, ctx=1159, majf=0, minf=1 00:13:25.312 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 issued rwts: total=512,647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.312 job2: (groupid=0, jobs=1): err= 0: pid=3745170: Wed Nov 6 10:05:28 2024 00:13:25.312 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:25.312 slat (nsec): min=8048, max=48871, avg=28460.04, stdev=3934.59 00:13:25.312 clat (usec): min=719, max=1271, avg=1073.11, stdev=83.46 00:13:25.312 lat (usec): min=729, max=1302, avg=1101.57, stdev=84.27 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 816], 5.00th=[ 922], 10.00th=[ 979], 20.00th=[ 1012], 00:13:25.312 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1090], 00:13:25.312 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:13:25.312 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:13:25.312 | 99.99th=[ 1270] 00:13:25.312 write: IOPS=638, BW=2553KiB/s (2615kB/s)(2556KiB/1001msec); 0 zone resets 00:13:25.312 slat (usec): min=9, max=1784, avg=34.26, stdev=70.36 00:13:25.312 clat (usec): min=257, max=1037, avg=634.58, stdev=145.39 00:13:25.312 lat (usec): min=267, max=2571, avg=668.84, stdev=168.82 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 281], 5.00th=[ 347], 10.00th=[ 433], 20.00th=[ 506], 00:13:25.312 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 676], 00:13:25.312 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 857], 00:13:25.312 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 1037], 99.95th=[ 1037], 00:13:25.312 | 99.99th=[ 1037] 00:13:25.312 bw ( KiB/s): min= 4096, max= 4096, per=34.26%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.312 lat (usec) : 500=10.60%, 750=32.67%, 1000=19.64% 00:13:25.312 lat (msec) : 2=37.10% 00:13:25.312 cpu : usr=1.60%, sys=5.30%, ctx=1155, majf=0, minf=1 00:13:25.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 issued rwts: total=512,639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.312 job3: (groupid=0, jobs=1): err= 0: pid=3745171: Wed Nov 6 10:05:28 2024 00:13:25.312 read: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec) 00:13:25.312 slat (nsec): min=6318, max=48926, avg=26216.44, stdev=6405.45 00:13:25.312 clat (usec): min=321, max=1533, avg=787.24, stdev=111.86 00:13:25.312 lat (usec): min=349, max=1560, avg=813.46, stdev=112.97 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 441], 5.00th=[ 594], 10.00th=[ 635], 20.00th=[ 701], 00:13:25.312 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 824], 00:13:25.312 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 938], 00:13:25.312 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1532], 99.95th=[ 1532], 00:13:25.312 | 99.99th=[ 1532] 00:13:25.312 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:25.312 slat (nsec): min=9156, max=56016, avg=30849.98, stdev=9791.84 00:13:25.312 clat (usec): min=118, max=2948, avg=395.39, stdev=137.83 00:13:25.312 
lat (usec): min=129, max=2986, avg=426.24, stdev=140.67 00:13:25.312 clat percentiles (usec): 00:13:25.312 | 1.00th=[ 129], 5.00th=[ 215], 10.00th=[ 269], 20.00th=[ 306], 00:13:25.312 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 424], 00:13:25.312 | 70.00th=[ 457], 80.00th=[ 486], 90.00th=[ 553], 95.00th=[ 594], 00:13:25.312 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 717], 99.95th=[ 2933], 00:13:25.312 | 99.99th=[ 2933] 00:13:25.312 bw ( KiB/s): min= 4096, max= 4096, per=34.26%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.312 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.312 lat (usec) : 250=5.17%, 500=45.15%, 750=22.34%, 1000=27.04% 00:13:25.312 lat (msec) : 2=0.24%, 4=0.06% 00:13:25.312 cpu : usr=3.60%, sys=6.50%, ctx=1701, majf=0, minf=1 00:13:25.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.312 issued rwts: total=677,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.312 00:13:25.312 Run status group 0 (all jobs): 00:13:25.312 READ: bw=8843KiB/s (9055kB/s), 2046KiB/s-2705KiB/s (2095kB/s-2770kB/s), io=8852KiB (9064kB), run=1001-1001msec 00:13:25.312 WRITE: bw=11.7MiB/s (12.2MB/s), 2553KiB/s-4092KiB/s (2615kB/s-4190kB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:13:25.312 00:13:25.312 Disk stats (read/write): 00:13:25.312 nvme0n1: ios=519/512, merge=0/0, ticks=503/268, in_queue=771, util=88.08% 00:13:25.312 nvme0n2: ios=486/512, merge=0/0, ticks=532/299, in_queue=831, util=88.38% 00:13:25.312 nvme0n3: ios=505/512, merge=0/0, ticks=682/257, in_queue=939, util=98.22% 00:13:25.312 nvme0n4: ios=512/961, merge=0/0, ticks=337/271, in_queue=608, util=89.51% 00:13:25.312 10:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:25.313 [global] 00:13:25.313 thread=1 00:13:25.313 invalidate=1 00:13:25.313 rw=randwrite 00:13:25.313 time_based=1 00:13:25.313 runtime=1 00:13:25.313 ioengine=libaio 00:13:25.313 direct=1 00:13:25.313 bs=4096 00:13:25.313 iodepth=1 00:13:25.313 norandommap=0 00:13:25.313 numjobs=1 00:13:25.313 00:13:25.313 verify_dump=1 00:13:25.313 verify_backlog=512 00:13:25.313 verify_state_save=0 00:13:25.313 do_verify=1 00:13:25.313 verify=crc32c-intel 00:13:25.313 [job0] 00:13:25.313 filename=/dev/nvme0n1 00:13:25.313 [job1] 00:13:25.313 filename=/dev/nvme0n2 00:13:25.313 [job2] 00:13:25.313 filename=/dev/nvme0n3 00:13:25.313 [job3] 00:13:25.313 filename=/dev/nvme0n4 00:13:25.313 Could not set queue depth (nvme0n1) 00:13:25.313 Could not set queue depth (nvme0n2) 00:13:25.313 Could not set queue depth (nvme0n3) 00:13:25.313 Could not set queue depth (nvme0n4) 00:13:25.573 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.573 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.573 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.573 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.573 fio-3.35 00:13:25.573 Starting 4 threads 00:13:26.960 00:13:26.960 job0: 
(groupid=0, jobs=1): err= 0: pid=3745689: Wed Nov 6 10:05:30 2024 00:13:26.960 read: IOPS=18, BW=73.4KiB/s (75.2kB/s)(76.0KiB/1035msec) 00:13:26.960 slat (nsec): min=26243, max=27848, avg=26824.53, stdev=448.58 00:13:26.960 clat (usec): min=967, max=42879, avg=39088.94, stdev=9245.92 00:13:26.960 lat (usec): min=993, max=42906, avg=39115.76, stdev=9245.94 00:13:26.960 clat percentiles (usec): 00:13:26.960 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[40633], 20.00th=[41157], 00:13:26.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:26.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42730], 00:13:26.960 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:26.960 | 99.99th=[42730] 00:13:26.960 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:13:26.960 slat (nsec): min=8780, max=52750, avg=27976.22, stdev=10538.64 00:13:26.960 clat (usec): min=197, max=1034, avg=533.66, stdev=174.69 00:13:26.960 lat (usec): min=206, max=1067, avg=561.63, stdev=179.45 00:13:26.960 clat percentiles (usec): 00:13:26.960 | 1.00th=[ 223], 5.00th=[ 253], 10.00th=[ 306], 20.00th=[ 367], 00:13:26.960 | 30.00th=[ 429], 40.00th=[ 478], 50.00th=[ 523], 60.00th=[ 586], 00:13:26.960 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 766], 95.00th=[ 824], 00:13:26.960 | 99.00th=[ 914], 99.50th=[ 979], 99.90th=[ 1037], 99.95th=[ 1037], 00:13:26.960 | 99.99th=[ 1037] 00:13:26.960 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.960 lat (usec) : 250=4.52%, 500=38.98%, 750=42.00%, 1000=10.92% 00:13:26.960 lat (msec) : 2=0.19%, 50=3.39% 00:13:26.960 cpu : usr=0.97%, sys=1.84%, ctx=531, majf=0, minf=1 00:13:26.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.960 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.960 job1: (groupid=0, jobs=1): err= 0: pid=3745690: Wed Nov 6 10:05:30 2024 00:13:26.960 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:26.960 slat (nsec): min=26327, max=59209, avg=27315.15, stdev=2928.44 00:13:26.960 clat (usec): min=573, max=3811, avg=1089.69, stdev=176.91 00:13:26.960 lat (usec): min=600, max=3838, avg=1117.01, stdev=176.82 00:13:26.960 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 783], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 988], 00:13:26.961 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:13:26.961 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1254], 00:13:26.961 | 99.00th=[ 1336], 99.50th=[ 1811], 99.90th=[ 3818], 99.95th=[ 3818], 00:13:26.961 | 99.99th=[ 3818] 00:13:26.961 write: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec); 0 zone resets 00:13:26.961 slat (nsec): min=8826, max=58634, avg=29772.90, stdev=9453.65 00:13:26.961 clat (usec): min=285, max=2422, avg=630.98, stdev=150.55 00:13:26.961 lat (usec): min=295, max=2455, avg=660.76, stdev=153.93 00:13:26.961 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 343], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 506], 00:13:26.961 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:13:26.961 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 848], 
00:13:26.961 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 2409], 99.95th=[ 2409], 00:13:26.961 | 99.99th=[ 2409] 00:13:26.961 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.961 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.961 lat (usec) : 500=10.12%, 750=35.43%, 1000=19.72% 00:13:26.961 lat (msec) : 2=34.55%, 4=0.17% 00:13:26.961 cpu : usr=2.50%, sys=4.40%, ctx=1146, majf=0, minf=1 00:13:26.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 issued rwts: total=512,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.961 job2: (groupid=0, jobs=1): err= 0: pid=3745691: Wed Nov 6 10:05:30 2024 00:13:26.961 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:26.961 slat (nsec): min=7247, max=60186, avg=25316.37, stdev=6031.99 00:13:26.961 clat (usec): min=372, max=42742, avg=1070.13, stdev=3632.84 00:13:26.961 lat (usec): min=381, max=42767, avg=1095.45, stdev=3632.85 00:13:26.961 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 457], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 619], 00:13:26.961 | 30.00th=[ 668], 40.00th=[ 709], 50.00th=[ 758], 60.00th=[ 799], 00:13:26.961 | 70.00th=[ 840], 80.00th=[ 873], 90.00th=[ 922], 95.00th=[ 955], 00:13:26.961 | 99.00th=[ 1045], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:13:26.961 | 99.99th=[42730] 00:13:26.961 write: IOPS=804, BW=3217KiB/s (3294kB/s)(3220KiB/1001msec); 0 zone resets 00:13:26.961 slat (nsec): min=9455, max=70476, avg=30557.50, stdev=7427.32 00:13:26.961 clat (usec): min=177, max=2602, avg=501.93, stdev=164.66 00:13:26.961 lat (usec): min=187, max=2633, avg=532.49, stdev=165.80 00:13:26.961 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 371], 00:13:26.961 | 30.00th=[ 412], 40.00th=[ 449], 50.00th=[ 494], 60.00th=[ 537], 00:13:26.961 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 725], 00:13:26.961 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 2606], 99.95th=[ 2606], 00:13:26.961 | 99.99th=[ 2606] 00:13:26.961 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.961 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.961 lat (usec) : 250=0.30%, 500=32.73%, 750=45.33%, 1000=20.73% 00:13:26.961 lat (msec) : 2=0.46%, 4=0.15%, 50=0.30% 00:13:26.961 cpu : usr=1.40%, sys=4.40%, ctx=1317, majf=0, minf=1 00:13:26.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 issued rwts: total=512,805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.961 job3: (groupid=0, jobs=1): err= 0: pid=3745692: Wed Nov 6 10:05:30 2024 00:13:26.961 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:26.961 slat (nsec): min=4117, max=45820, avg=18764.21, stdev=7597.50 00:13:26.961 clat (usec): min=139, max=1136, avg=493.21, stdev=103.02 00:13:26.961 lat (usec): min=143, max=1152, avg=511.98, stdev=104.86 00:13:26.961 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 237], 5.00th=[ 289], 10.00th=[ 338], 
20.00th=[ 416], 00:13:26.961 | 30.00th=[ 461], 40.00th=[ 494], 50.00th=[ 519], 60.00th=[ 537], 00:13:26.961 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 619], 00:13:26.961 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 979], 99.95th=[ 1139], 00:13:26.961 | 99.99th=[ 1139] 00:13:26.961 write: IOPS=1449, BW=5798KiB/s (5937kB/s)(5804KiB/1001msec); 0 zone resets 00:13:26.961 slat (nsec): min=5459, max=48268, avg=18845.91, stdev=9612.49 00:13:26.961 clat (usec): min=91, max=3402, avg=300.33, stdev=125.81 00:13:26.961 lat (usec): min=99, max=3408, avg=319.18, stdev=127.62 00:13:26.961 clat percentiles (usec): 00:13:26.961 | 1.00th=[ 101], 5.00th=[ 114], 10.00th=[ 149], 20.00th=[ 237], 00:13:26.961 | 30.00th=[ 260], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 326], 00:13:26.961 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 400], 95.00th=[ 433], 00:13:26.961 | 99.00th=[ 529], 99.50th=[ 652], 99.90th=[ 1057], 99.95th=[ 3392], 00:13:26.961 | 99.99th=[ 3392] 00:13:26.961 bw ( KiB/s): min= 5120, max= 5120, per=38.94%, avg=5120.00, stdev= 0.00, samples=1 00:13:26.961 iops : min= 1280, max= 1280, avg=1280.00, stdev= 0.00, samples=1 00:13:26.961 lat (usec) : 100=0.53%, 250=14.99%, 500=59.76%, 750=24.40%, 1000=0.20% 00:13:26.961 lat (msec) : 2=0.08%, 4=0.04% 00:13:26.961 cpu : usr=2.40%, sys=4.60%, ctx=2477, majf=0, minf=1 00:13:26.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.961 issued rwts: total=1024,1451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.961 00:13:26.961 Run status group 0 (all jobs): 00:13:26.961 READ: bw=7988KiB/s (8180kB/s), 73.4KiB/s-4092KiB/s (75.2kB/s-4190kB/s), io=8268KiB (8466kB), run=1001-1035msec 00:13:26.961 WRITE: bw=12.8MiB/s (13.5MB/s), 1979KiB/s-5798KiB/s (2026kB/s-5937kB/s), io=13.3MiB (13.9MB), run=1001-1035msec 00:13:26.961 00:13:26.961 Disk stats (read/write): 00:13:26.961 nvme0n1: ios=64/512, merge=0/0, ticks=673/225, in_queue=898, util=95.59% 00:13:26.961 nvme0n2: ios=472/512, merge=0/0, ticks=479/247, in_queue=726, util=87.76% 00:13:26.961 nvme0n3: ios=512/538, merge=0/0, ticks=538/233, in_queue=771, util=88.37% 00:13:26.961 nvme0n4: ios=975/1024, merge=0/0, ticks=1094/318, in_queue=1412, util=99.79% 00:13:26.961 10:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:26.961 [global] 00:13:26.961 thread=1 00:13:26.961 invalidate=1 00:13:26.961 rw=write 00:13:26.961 time_based=1 00:13:26.961 runtime=1 00:13:26.961 ioengine=libaio 00:13:26.961 direct=1 00:13:26.961 bs=4096 00:13:26.961 iodepth=128 00:13:26.961 norandommap=0 00:13:26.961 numjobs=1 00:13:26.961 00:13:26.961 verify_dump=1 00:13:26.961 verify_backlog=512 00:13:26.961 verify_state_save=0 00:13:26.961 do_verify=1 00:13:26.961 verify=crc32c-intel 00:13:26.961 [job0] 00:13:26.961 filename=/dev/nvme0n1 00:13:26.961 [job1] 00:13:26.961 filename=/dev/nvme0n2 00:13:26.961 [job2] 00:13:26.961 filename=/dev/nvme0n3 00:13:26.961 [job3] 00:13:26.961 filename=/dev/nvme0n4 00:13:26.961 Could not set queue depth (nvme0n1) 00:13:26.961 Could not set queue depth (nvme0n2) 00:13:26.961 Could not set queue depth (nvme0n3) 00:13:26.961 Could not set queue depth (nvme0n4) 00:13:27.223 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.223 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.223 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.223 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.223 fio-3.35 00:13:27.223 Starting 4 threads 00:13:28.609 00:13:28.609 job0: (groupid=0, jobs=1): err= 0: pid=3746216: Wed Nov 6 10:05:31 2024 00:13:28.609 read: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1012msec) 00:13:28.609 slat (nsec): min=956, max=12354k, avg=87703.74, stdev=696766.83 00:13:28.609 clat (usec): min=3611, max=38158, avg=11959.79, stdev=5554.37 00:13:28.609 lat (usec): min=3617, max=43915, avg=12047.49, stdev=5611.81 00:13:28.609 clat percentiles (usec): 00:13:28.609 | 1.00th=[ 4883], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7308], 00:13:28.609 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9896], 60.00th=[11469], 00:13:28.609 | 70.00th=[13829], 80.00th=[16909], 90.00th=[20579], 95.00th=[23725], 00:13:28.609 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31589], 99.95th=[38011], 00:13:28.609 | 99.99th=[38011] 00:13:28.609 write: IOPS=5776, BW=22.6MiB/s (23.7MB/s)(22.8MiB/1012msec); 0 zone resets 00:13:28.609 slat (nsec): min=1649, max=13256k, avg=80822.91, stdev=623450.47 00:13:28.609 clat (usec): min=1207, max=47760, avg=10436.41, stdev=6989.55 00:13:28.609 lat (usec): min=1217, max=47771, avg=10517.24, stdev=7037.21 00:13:28.609 clat percentiles (usec): 00:13:28.609 | 1.00th=[ 3785], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 5604], 00:13:28.609 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 8356], 60.00th=[ 9634], 00:13:28.609 | 70.00th=[11338], 80.00th=[13960], 90.00th=[18482], 95.00th=[22676], 00:13:28.609 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:13:28.609 | 99.99th=[47973] 00:13:28.609 bw ( KiB/s): min=22488, max=23264, per=23.61%, avg=22876.00, stdev=548.71, samples=2 00:13:28.609 iops : min= 5622, max= 5816, avg=5719.00, stdev=137.18, samples=2 00:13:28.609 lat (msec) : 2=0.13%, 4=0.78%, 10=54.74%, 20=33.80%, 50=10.55% 00:13:28.609 cpu : usr=4.35%, sys=6.53%, ctx=303, majf=0, minf=1 00:13:28.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:28.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.609 issued rwts: total=5632,5846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.609 job1: (groupid=0, jobs=1): err= 0: pid=3746217: Wed Nov 6 10:05:31 2024 00:13:28.609 read: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(38.0MiB/1007msec) 00:13:28.609 slat (nsec): min=951, max=6491.3k, avg=53463.81, stdev=387341.80 00:13:28.609 clat (usec): min=2348, max=13612, avg=7019.03, stdev=1719.88 00:13:28.609 lat (usec): min=2353, max=13621, avg=7072.49, stdev=1738.98 00:13:28.609 clat percentiles (usec): 00:13:28.609 | 1.00th=[ 3294], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5669], 00:13:28.609 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7111], 00:13:28.609 | 70.00th=[ 7635], 80.00th=[ 8225], 90.00th=[ 9372], 95.00th=[10421], 00:13:28.609 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13173], 99.95th=[13566], 00:13:28.609 | 99.99th=[13566] 00:13:28.609 write: IOPS=10.0k, BW=39.2MiB/s 
(41.1MB/s)(39.5MiB/1007msec); 0 zone resets 00:13:28.609 slat (nsec): min=1592, max=5665.9k, avg=42309.53, stdev=242607.31 00:13:28.609 clat (usec): min=829, max=13173, avg=5882.50, stdev=1558.12 00:13:28.609 lat (usec): min=863, max=13182, avg=5924.81, stdev=1566.40 00:13:28.609 clat percentiles (usec): 00:13:28.609 | 1.00th=[ 1926], 5.00th=[ 3097], 10.00th=[ 3687], 20.00th=[ 4490], 00:13:28.609 | 30.00th=[ 5342], 40.00th=[ 5800], 50.00th=[ 6194], 60.00th=[ 6521], 00:13:28.609 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7635], 00:13:28.609 | 99.00th=[10421], 99.50th=[11469], 99.90th=[12911], 99.95th=[12911], 00:13:28.609 | 99.99th=[13173] 00:13:28.609 bw ( KiB/s): min=37440, max=42360, per=41.18%, avg=39900.00, stdev=3478.97, samples=2 00:13:28.609 iops : min= 9360, max=10590, avg=9975.00, stdev=869.74, samples=2 00:13:28.609 lat (usec) : 1000=0.02% 00:13:28.609 lat (msec) : 2=0.53%, 4=7.17%, 10=88.24%, 20=4.05% 00:13:28.609 cpu : usr=7.16%, sys=6.76%, ctx=990, majf=0, minf=2 00:13:28.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:28.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.610 issued rwts: total=9728,10102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.610 job2: (groupid=0, jobs=1): err= 0: pid=3746219: Wed Nov 6 10:05:31 2024 00:13:28.610 read: IOPS=4909, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1003msec) 00:13:28.610 slat (nsec): min=971, max=10753k, avg=87672.70, stdev=652870.11 00:13:28.610 clat (usec): min=1062, max=40967, avg=11534.17, stdev=4965.38 00:13:28.610 lat (usec): min=4560, max=40976, avg=11621.84, stdev=5011.30 00:13:28.610 clat percentiles (usec): 00:13:28.610 | 1.00th=[ 4948], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8225], 00:13:28.610 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11207], 00:13:28.610 | 70.00th=[12125], 80.00th=[13566], 90.00th=[16909], 95.00th=[19530], 00:13:28.610 | 99.00th=[34341], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:13:28.610 | 99.99th=[41157] 00:13:28.610 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:28.610 slat (nsec): min=1678, max=9046.4k, avg=93441.75, stdev=535980.27 00:13:28.610 clat (usec): min=1294, max=63555, avg=13511.72, stdev=12107.82 00:13:28.610 lat (usec): min=1305, max=63566, avg=13605.16, stdev=12191.08 00:13:28.610 clat percentiles (usec): 00:13:28.610 | 1.00th=[ 4113], 5.00th=[ 5473], 10.00th=[ 6718], 20.00th=[ 8029], 00:13:28.610 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[10028], 00:13:28.610 | 70.00th=[11076], 80.00th=[14877], 90.00th=[27132], 95.00th=[40633], 00:13:28.610 | 99.00th=[61604], 99.50th=[62129], 99.90th=[63701], 99.95th=[63701], 00:13:28.610 | 99.99th=[63701] 00:13:28.610 bw ( KiB/s): min=16384, max=24576, per=21.14%, avg=20480.00, stdev=5792.62, samples=2 00:13:28.610 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:13:28.610 lat (msec) : 2=0.03%, 4=0.45%, 10=52.72%, 20=37.73%, 50=6.85% 00:13:28.610 lat (msec) : 100=2.22% 00:13:28.610 cpu : usr=3.09%, sys=6.29%, ctx=477, majf=0, minf=1 00:13:28.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:28.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.610 issued 
rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.610 job3: (groupid=0, jobs=1): err= 0: pid=3746220: Wed Nov 6 10:05:31 2024 00:13:28.610 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:13:28.610 slat (nsec): min=915, max=18122k, avg=142709.72, stdev=1088905.86 00:13:28.610 clat (usec): min=1808, max=67922, avg=18514.07, stdev=9891.20 00:13:28.610 lat (usec): min=1848, max=67930, avg=18656.78, stdev=9960.60 00:13:28.610 clat percentiles (usec): 00:13:28.610 | 1.00th=[ 4424], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[12518], 00:13:28.610 | 30.00th=[14222], 40.00th=[15533], 50.00th=[16581], 60.00th=[18482], 00:13:28.610 | 70.00th=[20055], 80.00th=[21627], 90.00th=[24773], 95.00th=[40633], 00:13:28.610 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:13:28.610 | 99.99th=[67634] 00:13:28.610 write: IOPS=3405, BW=13.3MiB/s (13.9MB/s)(13.5MiB/1012msec); 0 zone resets 00:13:28.610 slat (nsec): min=1614, max=14336k, avg=156188.49, stdev=881922.79 00:13:28.610 clat (usec): min=716, max=75689, avg=20605.88, stdev=16266.03 00:13:28.610 lat (usec): min=836, max=75698, avg=20762.07, stdev=16371.66 00:13:28.610 clat percentiles (usec): 00:13:28.610 | 1.00th=[ 2180], 5.00th=[ 5932], 10.00th=[ 7963], 20.00th=[ 9765], 00:13:28.610 | 30.00th=[11469], 40.00th=[12911], 50.00th=[14484], 60.00th=[16057], 00:13:28.610 | 70.00th=[20841], 80.00th=[26084], 90.00th=[52167], 95.00th=[61080], 00:13:28.610 | 99.00th=[67634], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:13:28.610 | 99.99th=[76022] 00:13:28.610 bw ( KiB/s): min=12288, max=14264, per=13.70%, avg=13276.00, stdev=1397.24, samples=2 00:13:28.610 iops : min= 3072, max= 3566, avg=3319.00, stdev=349.31, samples=2 00:13:28.610 lat (usec) : 750=0.02%, 1000=0.02% 00:13:28.610 lat (msec) : 2=0.51%, 4=1.06%, 10=15.79%, 20=52.98%, 50=22.45% 00:13:28.610 lat (msec) : 100=7.20% 00:13:28.610 cpu : usr=1.98%, sys=3.96%, ctx=288, majf=0, minf=1 00:13:28.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:28.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.610 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.610 00:13:28.610 Run status group 0 (all jobs): 00:13:28.610 READ: bw=90.2MiB/s (94.5MB/s), 11.9MiB/s-37.7MiB/s (12.4MB/s-39.6MB/s), io=91.2MiB (95.7MB), run=1003-1012msec 00:13:28.610 WRITE: bw=94.6MiB/s (99.2MB/s), 13.3MiB/s-39.2MiB/s (13.9MB/s-41.1MB/s), io=95.8MiB (100MB), run=1003-1012msec 00:13:28.610 00:13:28.610 Disk stats (read/write): 00:13:28.610 nvme0n1: ios=4658/4624, merge=0/0, ticks=54792/46200, in_queue=100992, util=87.68% 00:13:28.610 nvme0n2: ios=8228/8539, merge=0/0, ticks=54135/47517, in_queue=101652, util=92.05% 00:13:28.610 nvme0n3: ios=3642/3935, merge=0/0, ticks=40795/54581, in_queue=95376, util=96.52% 00:13:28.610 nvme0n4: ios=2560/2914, merge=0/0, ticks=35078/48562, in_queue=83640, util=89.42% 00:13:28.610 10:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:28.610 [global] 00:13:28.610 thread=1 00:13:28.610 invalidate=1 00:13:28.610 rw=randwrite 00:13:28.610 time_based=1 00:13:28.610 runtime=1 00:13:28.610 ioengine=libaio 
00:13:28.610 direct=1 00:13:28.610 bs=4096 00:13:28.610 iodepth=128 00:13:28.610 norandommap=0 00:13:28.610 numjobs=1 00:13:28.610 00:13:28.610 verify_dump=1 00:13:28.610 verify_backlog=512 00:13:28.610 verify_state_save=0 00:13:28.610 do_verify=1 00:13:28.610 verify=crc32c-intel 00:13:28.610 [job0] 00:13:28.610 filename=/dev/nvme0n1 00:13:28.610 [job1] 00:13:28.610 filename=/dev/nvme0n2 00:13:28.610 [job2] 00:13:28.610 filename=/dev/nvme0n3 00:13:28.610 [job3] 00:13:28.610 filename=/dev/nvme0n4 00:13:28.610 Could not set queue depth (nvme0n1) 00:13:28.610 Could not set queue depth (nvme0n2) 00:13:28.610 Could not set queue depth (nvme0n3) 00:13:28.610 Could not set queue depth (nvme0n4) 00:13:28.872 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.872 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.872 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.872 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.872 fio-3.35 00:13:28.872 Starting 4 threads 00:13:30.257 00:13:30.257 job0: (groupid=0, jobs=1): err= 0: pid=3746744: Wed Nov 6 10:05:33 2024 00:13:30.257 read: IOPS=7097, BW=27.7MiB/s (29.1MB/s)(28.0MiB/1010msec) 00:13:30.257 slat (nsec): min=965, max=10945k, avg=71702.17, stdev=553488.30 00:13:30.257 clat (usec): min=2874, max=28479, avg=8832.06, stdev=3188.02 00:13:30.257 lat (usec): min=2878, max=28481, avg=8903.76, stdev=3234.80 00:13:30.257 clat percentiles (usec): 00:13:30.257 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 6456], 00:13:30.257 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8586], 00:13:30.257 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[12125], 95.00th=[13566], 00:13:30.257 | 99.00th=[22676], 99.50th=[24511], 99.90th=[27395], 99.95th=[28443], 00:13:30.257 | 99.99th=[28443] 00:13:30.257 write: IOPS=7340, BW=28.7MiB/s (30.1MB/s)(29.0MiB/1010msec); 0 zone resets 00:13:30.257 slat (nsec): min=1570, max=6111.3k, avg=60288.31, stdev=306788.99 00:13:30.257 clat (usec): min=1438, max=28476, avg=8750.70, stdev=4773.49 00:13:30.257 lat (usec): min=2001, max=28478, avg=8810.99, stdev=4807.56 00:13:30.257 clat percentiles (usec): 00:13:30.257 | 1.00th=[ 2769], 5.00th=[ 3785], 10.00th=[ 4621], 20.00th=[ 5735], 00:13:30.257 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7635], 00:13:30.257 | 70.00th=[ 7963], 80.00th=[11600], 90.00th=[17433], 95.00th=[18744], 00:13:30.257 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25035], 99.95th=[27919], 00:13:30.257 | 99.99th=[28443] 00:13:30.257 bw ( KiB/s): min=21424, max=36864, per=32.47%, avg=29144.00, stdev=10917.73, samples=2 00:13:30.257 iops : min= 5356, max= 9216, avg=7286.00, stdev=2729.43, samples=2 00:13:30.257 lat (msec) : 2=0.01%, 4=3.63%, 10=71.12%, 20=23.07%, 50=2.17% 00:13:30.257 cpu : usr=4.06%, sys=7.63%, ctx=748, majf=0, minf=1 00:13:30.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:30.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.257 issued rwts: total=7168,7414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.257 job1: (groupid=0, jobs=1): err= 0: pid=3746745: Wed Nov 6 
10:05:33 2024 00:13:30.257 read: IOPS=4360, BW=17.0MiB/s (17.9MB/s)(17.2MiB/1007msec) 00:13:30.257 slat (nsec): min=951, max=20787k, avg=119842.24, stdev=810156.30 00:13:30.257 clat (usec): min=2355, max=55464, avg=14994.68, stdev=8786.68 00:13:30.257 lat (usec): min=6791, max=55492, avg=15114.52, stdev=8860.84 00:13:30.257 clat percentiles (usec): 00:13:30.257 | 1.00th=[ 7570], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:13:30.257 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12125], 00:13:30.257 | 70.00th=[13304], 80.00th=[15270], 90.00th=[26346], 95.00th=[40109], 00:13:30.257 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:13:30.257 | 99.99th=[55313] 00:13:30.257 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:13:30.257 slat (nsec): min=1582, max=10012k, avg=99030.47, stdev=444930.94 00:13:30.257 clat (usec): min=7472, max=52888, avg=13342.18, stdev=6125.53 00:13:30.257 lat (usec): min=7473, max=52890, avg=13441.21, stdev=6163.90 00:13:30.257 clat percentiles (usec): 00:13:30.257 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:13:30.257 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:13:30.257 | 70.00th=[12125], 80.00th=[16057], 90.00th=[19268], 95.00th=[26608], 00:13:30.257 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:13:30.257 | 99.99th=[52691] 00:13:30.257 bw ( KiB/s): min=12288, max=24576, per=20.53%, avg=18432.00, stdev=8688.93, samples=2 00:13:30.257 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:13:30.257 lat (msec) : 4=0.01%, 10=13.96%, 20=74.97%, 50=10.83%, 100=0.22% 00:13:30.257 cpu : usr=2.58%, sys=4.57%, ctx=576, majf=0, minf=1 00:13:30.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:30.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.258 issued rwts: total=4391,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.258 job2: (groupid=0, jobs=1): err= 0: pid=3746746: Wed Nov 6 10:05:33 2024 00:13:30.258 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:13:30.258 slat (nsec): min=955, max=9694.3k, avg=68875.92, stdev=528049.86 00:13:30.258 clat (usec): min=3263, max=38105, avg=9655.51, stdev=3359.27 00:13:30.258 lat (usec): min=3271, max=38113, avg=9724.38, stdev=3395.80 00:13:30.258 clat percentiles (usec): 00:13:30.258 | 1.00th=[ 5145], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 7963], 00:13:30.258 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:13:30.258 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[11994], 95.00th=[14353], 00:13:30.258 | 99.00th=[25822], 99.50th=[35914], 99.90th=[37487], 99.95th=[38011], 00:13:30.258 | 99.99th=[38011] 00:13:30.258 write: IOPS=6748, BW=26.4MiB/s (27.6MB/s)(26.5MiB/1004msec); 0 zone resets 00:13:30.258 slat (nsec): min=1540, max=9636.4k, avg=61784.63, stdev=409787.57 00:13:30.258 clat (usec): min=635, max=31974, avg=9320.81, stdev=4358.79 00:13:30.258 lat (usec): min=644, max=31977, avg=9382.59, stdev=4393.53 00:13:30.258 clat percentiles (usec): 00:13:30.258 | 1.00th=[ 1795], 5.00th=[ 3916], 10.00th=[ 4686], 20.00th=[ 6325], 00:13:30.258 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:13:30.258 | 70.00th=[ 9634], 80.00th=[11731], 90.00th=[15926], 95.00th=[18744], 00:13:30.258 | 
99.00th=[22152], 99.50th=[25822], 99.90th=[29492], 99.95th=[29492], 00:13:30.258 | 99.99th=[31851] 00:13:30.258 bw ( KiB/s): min=25136, max=28120, per=29.67%, avg=26628.00, stdev=2110.01, samples=2 00:13:30.258 iops : min= 6284, max= 7030, avg=6657.00, stdev=527.50, samples=2 00:13:30.258 lat (usec) : 750=0.04% 00:13:30.258 lat (msec) : 2=0.60%, 4=2.31%, 10=68.43%, 20=26.52%, 50=2.10% 00:13:30.258 cpu : usr=3.99%, sys=7.98%, ctx=554, majf=0, minf=2 00:13:30.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:30.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.258 issued rwts: total=6656,6775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.258 job3: (groupid=0, jobs=1): err= 0: pid=3746747: Wed Nov 6 10:05:33 2024 00:13:30.258 read: IOPS=4349, BW=17.0MiB/s (17.8MB/s)(17.7MiB/1043msec) 00:13:30.258 slat (nsec): min=984, max=26023k, avg=110416.60, stdev=859188.14 00:13:30.258 clat (usec): min=5051, max=62993, avg=15213.51, stdev=10841.61 00:13:30.258 lat (usec): min=5055, max=65248, avg=15323.92, stdev=10907.31 00:13:30.258 clat percentiles (usec): 00:13:30.258 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[ 8717], 00:13:30.258 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10814], 00:13:30.258 | 70.00th=[13829], 80.00th=[21627], 90.00th=[31327], 95.00th=[44827], 00:13:30.258 | 99.00th=[49546], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:13:30.258 | 99.99th=[63177] 00:13:30.258 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:13:30.258 slat (nsec): min=1595, max=14934k, avg=103695.56, stdev=605591.05 00:13:30.258 clat (usec): min=4860, max=53941, avg=13546.28, stdev=7307.98 00:13:30.258 lat (usec): min=4869, max=53951, avg=13649.98, stdev=7362.13 00:13:30.258 clat percentiles (usec): 00:13:30.258 | 1.00th=[ 5997], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8225], 00:13:30.258 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[13304], 00:13:30.258 | 70.00th=[15795], 80.00th=[17957], 90.00th=[22938], 95.00th=[27395], 00:13:30.258 | 99.00th=[47449], 99.50th=[49021], 99.90th=[53216], 99.95th=[53216], 00:13:30.258 | 99.99th=[53740] 00:13:30.258 bw ( KiB/s): min=16384, max=20480, per=20.53%, avg=18432.00, stdev=2896.31, samples=2 00:13:30.258 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:30.258 lat (msec) : 10=52.52%, 20=29.45%, 50=17.53%, 100=0.50% 00:13:30.258 cpu : usr=1.73%, sys=5.28%, ctx=600, majf=0, minf=1 00:13:30.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:30.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.258 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.258 00:13:30.258 Run status group 0 (all jobs): 00:13:30.258 READ: bw=85.2MiB/s (89.3MB/s), 17.0MiB/s-27.7MiB/s (17.8MB/s-29.1MB/s), io=88.9MiB (93.2MB), run=1004-1043msec 00:13:30.258 WRITE: bw=87.7MiB/s (91.9MB/s), 17.3MiB/s-28.7MiB/s (18.1MB/s-30.1MB/s), io=91.4MiB (95.9MB), run=1004-1043msec 00:13:30.258 00:13:30.258 Disk stats (read/write): 00:13:30.258 nvme0n1: ios=6431/6656, merge=0/0, ticks=50793/50362, in_queue=101155, util=87.98% 00:13:30.258 nvme0n2: 
ios=4091/4096, merge=0/0, ticks=18130/15514, in_queue=33644, util=100.00% 00:13:30.258 nvme0n3: ios=5172/5632, merge=0/0, ticks=49287/52340, in_queue=101627, util=97.04% 00:13:30.258 nvme0n4: ios=3191/3584, merge=0/0, ticks=25689/25978, in_queue=51667, util=97.01% 00:13:30.258 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:30.258 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3747079 00:13:30.258 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:30.258 10:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:30.258 [global] 00:13:30.258 thread=1 00:13:30.258 invalidate=1 00:13:30.258 rw=read 00:13:30.258 time_based=1 00:13:30.258 runtime=10 00:13:30.258 ioengine=libaio 00:13:30.258 direct=1 00:13:30.258 bs=4096 00:13:30.258 iodepth=1 00:13:30.258 norandommap=1 00:13:30.258 numjobs=1 00:13:30.258 00:13:30.258 [job0] 00:13:30.258 filename=/dev/nvme0n1 00:13:30.258 [job1] 00:13:30.258 filename=/dev/nvme0n2 00:13:30.258 [job2] 00:13:30.258 filename=/dev/nvme0n3 00:13:30.258 [job3] 00:13:30.258 filename=/dev/nvme0n4 00:13:30.258 Could not set queue depth (nvme0n1) 00:13:30.258 Could not set queue depth (nvme0n2) 00:13:30.258 Could not set queue depth (nvme0n3) 00:13:30.258 Could not set queue depth (nvme0n4) 00:13:30.594 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.594 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.594 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.594 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.594 fio-3.35 00:13:30.594 Starting 4 threads 00:13:33.223 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:33.483 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8241152, buflen=4096 00:13:33.483 fio: pid=3747269, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:33.483 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:33.483 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2932736, buflen=4096 00:13:33.483 fio: pid=3747268, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:33.483 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.483 10:05:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:33.743 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.743 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:33.743 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, 
buflen=4096 00:13:33.743 fio: pid=3747265, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:34.004 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.004 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:34.004 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=319488, buflen=4096 00:13:34.004 fio: pid=3747266, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:34.004 00:13:34.004 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3747265: Wed Nov 6 10:05:37 2024 00:13:34.004 read: IOPS=24, BW=95.2KiB/s (97.5kB/s)(284KiB/2984msec) 00:13:34.004 slat (usec): min=25, max=25625, avg=461.22, stdev=3080.14 00:13:34.004 clat (usec): min=1064, max=42934, avg=41242.89, stdev=4858.30 00:13:34.004 lat (usec): min=1131, max=68005, avg=41710.23, stdev=5846.84 00:13:34.004 clat percentiles (usec): 00:13:34.004 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:34.005 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:34.005 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:34.005 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:34.005 | 99.99th=[42730] 00:13:34.005 bw ( KiB/s): min= 96, max= 96, per=2.65%, avg=96.00, stdev= 0.00, samples=5 00:13:34.005 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:13:34.005 lat (msec) : 2=1.39%, 50=97.22% 00:13:34.005 cpu : usr=0.13%, sys=0.00%, ctx=75, majf=0, minf=1 00:13:34.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.005 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3747266: Wed Nov 6 10:05:37 2024 00:13:34.005 read: IOPS=24, BW=98.2KiB/s (101kB/s)(312KiB/3178msec) 00:13:34.005 slat (usec): min=8, max=5573, avg=98.16, stdev=624.44 00:13:34.005 clat (usec): min=920, max=43015, avg=40354.33, stdev=7939.41 00:13:34.005 lat (usec): min=928, max=46866, avg=40453.42, stdev=7972.89 00:13:34.005 clat percentiles (usec): 00:13:34.005 | 1.00th=[ 922], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:13:34.005 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:34.005 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:13:34.005 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:34.005 | 99.99th=[43254] 00:13:34.005 bw ( KiB/s): min= 96, max= 112, per=2.71%, avg=98.67, stdev= 6.53, samples=6 00:13:34.005 iops : min= 24, max= 28, avg=24.67, stdev= 1.63, samples=6 00:13:34.005 lat (usec) : 1000=2.53% 00:13:34.005 lat (msec) : 2=1.27%, 50=94.94% 00:13:34.005 cpu : usr=0.09%, sys=0.00%, ctx=81, majf=0, minf=2 00:13:34.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 complete : 0=1.2%, 4=98.8%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.005 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3747268: Wed Nov 6 10:05:37 2024 00:13:34.005 read: IOPS=257, BW=1028KiB/s (1053kB/s)(2864KiB/2786msec) 00:13:34.005 slat (nsec): min=6816, max=55500, avg=23147.76, stdev=7282.54 00:13:34.005 clat (usec): min=527, max=42659, avg=3831.42, stdev=10274.22 00:13:34.005 lat (usec): min=552, max=42694, avg=3854.56, stdev=10274.80 00:13:34.005 clat percentiles (usec): 00:13:34.005 | 1.00th=[ 783], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 979], 00:13:34.005 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:13:34.005 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[41681], 00:13:34.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:13:34.005 | 99.99th=[42730] 00:13:34.005 bw ( KiB/s): min= 96, max= 2832, per=31.18%, avg=1129.60, stdev=1412.57, samples=5 00:13:34.005 iops : min= 24, max= 708, avg=282.40, stdev=353.14, samples=5 00:13:34.005 lat (usec) : 750=0.56%, 1000=27.20% 00:13:34.005 lat (msec) : 2=65.27%, 50=6.83% 00:13:34.005 cpu : usr=0.14%, sys=0.83%, ctx=717, majf=0, minf=2 00:13:34.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.005 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3747269: Wed Nov 6 10:05:37 2024 00:13:34.005 read: IOPS=773, BW=3092KiB/s (3166kB/s)(8048KiB/2603msec) 00:13:34.005 slat (nsec): min=6999, max=67269, avg=23683.50, stdev=6888.33 00:13:34.005 clat (usec): min=428, max=42935, avg=1253.52, stdev=4055.95 00:13:34.005 lat (usec): min=435, max=42960, avg=1277.20, stdev=4055.46 00:13:34.005 clat percentiles (usec): 00:13:34.005 | 1.00th=[ 519], 5.00th=[ 594], 10.00th=[ 652], 20.00th=[ 725], 00:13:34.005 | 30.00th=[ 775], 40.00th=[ 824], 50.00th=[ 857], 60.00th=[ 881], 00:13:34.005 | 70.00th=[ 922], 80.00th=[ 963], 90.00th=[ 1037], 95.00th=[ 1074], 00:13:34.005 | 99.00th=[11600], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:34.005 | 99.99th=[42730] 00:13:34.005 bw ( KiB/s): min= 1248, max= 4704, per=88.26%, avg=3196.80, stdev=1618.79, samples=5 00:13:34.005 iops : min= 312, max= 1176, avg=799.20, stdev=404.70, samples=5 00:13:34.005 lat (usec) : 500=0.70%, 750=22.70%, 1000=62.20% 00:13:34.005 lat (msec) : 2=13.31%, 20=0.05%, 50=0.99% 00:13:34.005 cpu : usr=0.50%, sys=2.42%, ctx=2013, majf=0, minf=2 00:13:34.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.005 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.005 00:13:34.005 Run status group 0 (all jobs): 00:13:34.005 READ: bw=3621KiB/s (3708kB/s), 95.2KiB/s-3092KiB/s (97.5kB/s-3166kB/s), io=11.2MiB (11.8MB), run=2603-3178msec 00:13:34.005 00:13:34.005 Disk 
stats (read/write): 00:13:34.005 nvme0n1: ios=68/0, merge=0/0, ticks=2804/0, in_queue=2804, util=93.92% 00:13:34.005 nvme0n2: ios=76/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.51% 00:13:34.005 nvme0n3: ios=709/0, merge=0/0, ticks=2495/0, in_queue=2495, util=95.99% 00:13:34.005 nvme0n4: ios=2012/0, merge=0/0, ticks=2486/0, in_queue=2486, util=96.42% 00:13:34.005 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.005 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:34.265 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.266 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:34.525 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.525 10:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3747079 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:34.785 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:35.045 nvmf hotplug test: fio failed as expected 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.045 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.045 rmmod nvme_tcp 00:13:35.305 rmmod nvme_fabrics 00:13:35.305 rmmod nvme_keyring 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3743435 ']' 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3743435 ']' 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3743435' 00:13:35.305 killing process with pid 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3743435 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.305 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.565 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.565 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.565 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.565 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.565 10:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.475 00:13:37.475 real 0m30.194s 00:13:37.475 user 2m26.026s 00:13:37.475 sys 0m10.208s 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 ************************************ 00:13:37.475 END TEST nvmf_fio_target 00:13:37.475 ************************************ 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 ************************************ 00:13:37.475 START TEST nvmf_bdevio 00:13:37.475 ************************************ 00:13:37.475 10:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:37.736 * Looking for test storage... 
00:13:37.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.736 --rc genhtml_branch_coverage=1 00:13:37.736 --rc genhtml_function_coverage=1 00:13:37.736 --rc genhtml_legend=1 00:13:37.736 --rc geninfo_all_blocks=1 00:13:37.736 --rc geninfo_unexecuted_blocks=1 00:13:37.736 00:13:37.736 ' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.736 --rc genhtml_branch_coverage=1 00:13:37.736 --rc genhtml_function_coverage=1 00:13:37.736 --rc genhtml_legend=1 00:13:37.736 --rc geninfo_all_blocks=1 00:13:37.736 --rc geninfo_unexecuted_blocks=1 00:13:37.736 00:13:37.736 ' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.736 --rc genhtml_branch_coverage=1 00:13:37.736 --rc genhtml_function_coverage=1 00:13:37.736 --rc genhtml_legend=1 00:13:37.736 --rc geninfo_all_blocks=1 00:13:37.736 --rc geninfo_unexecuted_blocks=1 00:13:37.736 00:13:37.736 ' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:37.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.736 --rc genhtml_branch_coverage=1 00:13:37.736 --rc genhtml_function_coverage=1 00:13:37.736 --rc genhtml_legend=1 00:13:37.736 --rc geninfo_all_blocks=1 00:13:37.736 --rc geninfo_unexecuted_blocks=1 00:13:37.736 00:13:37.736 ' 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.736 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.737 10:05:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:45.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:45.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.870 10:05:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:45.870 Found net devices under 0000:31:00.0: cvl_0_0 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:45.870 Found net devices under 0000:31:00.1: cvl_0_1 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:45.870 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.871 
10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.871 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:46.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:13:46.131 00:13:46.131 --- 10.0.0.2 ping statistics --- 00:13:46.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.131 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:46.131 00:13:46.131 --- 10.0.0.1 ping statistics --- 00:13:46.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.131 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:46.131 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3752995 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3752995 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3752995 ']' 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.390 10:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.390 [2024-11-06 10:05:49.700160] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
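The nvmf_tcp_init sequence traced above is straightforward to reproduce outside the harness. A minimal sketch, assuming the two e810 ports still enumerate as cvl_0_0/cvl_0_1 and the workspace path matches this run, using only commands that appear in this trace:

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-side interface; the SPDK_NVMF comment is what teardown greps for later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # verify both directions, load the host driver, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &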
00:13:46.390 [2024-11-06 10:05:49.700228] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.390 [2024-11-06 10:05:49.808286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.390 [2024-11-06 10:05:49.844261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.390 [2024-11-06 10:05:49.844293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.390 [2024-11-06 10:05:49.844301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.390 [2024-11-06 10:05:49.844308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.390 [2024-11-06 10:05:49.844313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.390 [2024-11-06 10:05:49.845954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.390 [2024-11-06 10:05:49.846212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:46.390 [2024-11-06 10:05:49.846328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.390 [2024-11-06 10:05:49.846328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 [2024-11-06 10:05:50.575158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 Malloc0 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.329 10:05:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.329 [2024-11-06 10:05:50.656381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:47.329 { 00:13:47.329 "params": { 00:13:47.329 "name": "Nvme$subsystem", 00:13:47.329 "trtype": "$TEST_TRANSPORT", 00:13:47.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.329 "adrfam": "ipv4", 00:13:47.329 "trsvcid": "$NVMF_PORT", 00:13:47.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.329 "hdgst": ${hdgst:-false}, 00:13:47.329 "ddgst": ${ddgst:-false} 00:13:47.329 }, 00:13:47.329 "method": "bdev_nvme_attach_controller" 00:13:47.329 } 00:13:47.329 EOF 00:13:47.329 )") 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:47.329 10:05:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:47.329 "params": { 00:13:47.329 "name": "Nvme1", 00:13:47.329 "trtype": "tcp", 00:13:47.329 "traddr": "10.0.0.2", 00:13:47.329 "adrfam": "ipv4", 00:13:47.329 "trsvcid": "4420", 00:13:47.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.329 "hdgst": false, 00:13:47.329 "ddgst": false 00:13:47.329 }, 00:13:47.329 "method": "bdev_nvme_attach_controller" 00:13:47.329 }' 00:13:47.329 [2024-11-06 10:05:50.714421] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
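Once the target is listening on the RPC socket (/var/tmp/spdk.sock in this run), the objects bdevio.sh creates above boil down to five rpc.py calls; a minimal sketch with the same names and sizes (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) used by the test:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # path from this workspace; adjust as needed
  $RPC nvmf_create_transport -t tcp -o -u 8192                            # same transport options as the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                               # 64 MB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then reaches that listener through the bdev_nvme_attach_controller config printed above (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn cnode1, hostnqn nqn.2016-06.io.spdk:host1, digests disabled).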
00:13:47.329 [2024-11-06 10:05:50.714490] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753348 ] 00:13:47.329 [2024-11-06 10:05:50.801050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.589 [2024-11-06 10:05:50.845258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.589 [2024-11-06 10:05:50.845377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.589 [2024-11-06 10:05:50.845380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.589 I/O targets: 00:13:47.589 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:47.589 00:13:47.589 00:13:47.589 CUnit - A unit testing framework for C - Version 2.1-3 00:13:47.589 http://cunit.sourceforge.net/ 00:13:47.589 00:13:47.589 00:13:47.589 Suite: bdevio tests on: Nvme1n1 00:13:47.848 Test: blockdev write read block ...passed 00:13:47.848 Test: blockdev write zeroes read block ...passed 00:13:47.848 Test: blockdev write zeroes read no split ...passed 00:13:47.848 Test: blockdev write zeroes read split ...passed 00:13:47.848 Test: blockdev write zeroes read split partial ...passed 00:13:47.848 Test: blockdev reset ...[2024-11-06 10:05:51.235702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:47.848 [2024-11-06 10:05:51.235764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb34b0 (9): Bad file descriptor 00:13:47.848 [2024-11-06 10:05:51.346625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:47.848 passed 00:13:47.848 Test: blockdev write read 8 blocks ...passed 00:13:48.109 Test: blockdev write read size > 128k ...passed 00:13:48.109 Test: blockdev write read invalid size ...passed 00:13:48.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.109 Test: blockdev write read max offset ...passed 00:13:48.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.109 Test: blockdev writev readv 8 blocks ...passed 00:13:48.109 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.109 Test: blockdev writev readv block ...passed 00:13:48.109 Test: blockdev writev readv size > 128k ...passed 00:13:48.109 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.109 Test: blockdev comparev and writev ...[2024-11-06 10:05:51.570390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.570415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.570426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.570432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.570928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.570937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.570947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.570952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.571430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.571438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.571447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.571453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.571968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.571976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:48.109 [2024-11-06 10:05:51.571986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.109 [2024-11-06 10:05:51.571991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:48.369 passed 00:13:48.369 Test: blockdev nvme passthru rw ...passed 00:13:48.369 Test: blockdev nvme passthru vendor specific ...[2024-11-06 10:05:51.655742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.369 [2024-11-06 10:05:51.655752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:48.369 [2024-11-06 10:05:51.656078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.369 [2024-11-06 10:05:51.656086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:48.369 [2024-11-06 10:05:51.656413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.369 [2024-11-06 10:05:51.656421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:48.369 [2024-11-06 10:05:51.656739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.369 [2024-11-06 10:05:51.656747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:48.369 passed 00:13:48.369 Test: blockdev nvme admin passthru ...passed 00:13:48.369 Test: blockdev copy ...passed 00:13:48.369 00:13:48.369 Run Summary: Type Total Ran Passed Failed Inactive 00:13:48.369 suites 1 1 n/a 0 0 00:13:48.369 tests 23 23 23 0 0 00:13:48.369 asserts 152 152 152 0 n/a 00:13:48.369 00:13:48.369 Elapsed time = 1.312 seconds 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.369 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.369 rmmod nvme_tcp 00:13:48.369 rmmod nvme_fabrics 00:13:48.630 rmmod nvme_keyring 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3752995 ']' 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3752995 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3752995 ']' 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3752995 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3752995 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3752995' 00:13:48.630 killing process with pid 3752995 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3752995 00:13:48.630 10:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3752995 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.889 10:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.797 00:13:50.797 real 0m13.289s 00:13:50.797 user 0m13.856s 00:13:50.797 sys 0m6.985s 00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.797 ************************************ 00:13:50.797 END TEST nvmf_bdevio 00:13:50.797 ************************************ 00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:50.797 00:13:50.797 real 5m15.078s 00:13:50.797 user 11m38.186s 00:13:50.797 sys 1m57.442s 
00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.797 10:05:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:50.797 ************************************ 00:13:50.797 END TEST nvmf_target_core 00:13:50.797 ************************************ 00:13:51.057 10:05:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:51.057 10:05:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:51.057 10:05:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.057 10:05:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.057 ************************************ 00:13:51.057 START TEST nvmf_target_extra 00:13:51.057 ************************************ 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:51.057 * Looking for test storage... 00:13:51.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:51.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.057 --rc genhtml_branch_coverage=1 00:13:51.057 --rc genhtml_function_coverage=1 00:13:51.057 --rc genhtml_legend=1 00:13:51.057 --rc geninfo_all_blocks=1 00:13:51.057 --rc geninfo_unexecuted_blocks=1 00:13:51.057 00:13:51.057 ' 00:13:51.057 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:51.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.057 --rc genhtml_branch_coverage=1 00:13:51.057 --rc genhtml_function_coverage=1 00:13:51.057 --rc genhtml_legend=1 00:13:51.058 --rc geninfo_all_blocks=1 00:13:51.058 --rc geninfo_unexecuted_blocks=1 00:13:51.058 00:13:51.058 ' 00:13:51.058 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.058 --rc genhtml_branch_coverage=1 00:13:51.058 --rc genhtml_function_coverage=1 00:13:51.058 --rc genhtml_legend=1 00:13:51.058 --rc geninfo_all_blocks=1 00:13:51.058 --rc geninfo_unexecuted_blocks=1 00:13:51.058 00:13:51.058 ' 00:13:51.058 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.058 --rc genhtml_branch_coverage=1 00:13:51.058 --rc genhtml_function_coverage=1 00:13:51.058 --rc genhtml_legend=1 00:13:51.058 --rc geninfo_all_blocks=1 00:13:51.058 --rc geninfo_unexecuted_blocks=1 00:13:51.058 00:13:51.058 ' 00:13:51.058 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.058 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:51.318 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.319 ************************************ 00:13:51.319 START TEST nvmf_example 00:13:51.319 ************************************ 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:51.319 * Looking for test storage... 
00:13:51.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:51.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.319 --rc genhtml_branch_coverage=1 00:13:51.319 --rc genhtml_function_coverage=1 00:13:51.319 --rc genhtml_legend=1 00:13:51.319 --rc geninfo_all_blocks=1 00:13:51.319 --rc geninfo_unexecuted_blocks=1 00:13:51.319 00:13:51.319 ' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:51.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.319 --rc genhtml_branch_coverage=1 00:13:51.319 --rc genhtml_function_coverage=1 00:13:51.319 --rc genhtml_legend=1 00:13:51.319 --rc geninfo_all_blocks=1 00:13:51.319 --rc geninfo_unexecuted_blocks=1 00:13:51.319 00:13:51.319 ' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:51.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.319 --rc genhtml_branch_coverage=1 00:13:51.319 --rc genhtml_function_coverage=1 00:13:51.319 --rc genhtml_legend=1 00:13:51.319 --rc geninfo_all_blocks=1 00:13:51.319 --rc geninfo_unexecuted_blocks=1 00:13:51.319 00:13:51.319 ' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:51.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.319 --rc genhtml_branch_coverage=1 00:13:51.319 --rc genhtml_function_coverage=1 00:13:51.319 --rc genhtml_legend=1 00:13:51.319 --rc geninfo_all_blocks=1 00:13:51.319 --rc geninfo_unexecuted_blocks=1 00:13:51.319 00:13:51.319 ' 00:13:51.319 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:51.579 10:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.579 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:51.580 10:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.580 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:59.720 10:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.720 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:59.721 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:59.721 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:59.721 Found net devices under 0000:31:00.0: cvl_0_0 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:59.721 Found net devices under 0000:31:00.1: cvl_0_1 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.721 10:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.721 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:59.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:13:59.982 00:13:59.982 --- 10.0.0.2 ping statistics --- 00:13:59.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.982 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:59.982 00:13:59.982 --- 10.0.0.1 ping statistics --- 00:13:59.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.982 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:59.982 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:59.983 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.983 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3758549 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3758549 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3758549 ']' 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.244 10:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.244 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:01.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:13.409 Initializing NVMe Controllers 00:14:13.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.409 Initialization complete. Launching workers. 00:14:13.409 ======================================================== 00:14:13.409 Latency(us) 00:14:13.409 Device Information : IOPS MiB/s Average min max 00:14:13.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18073.78 70.60 3540.48 673.14 20093.35 00:14:13.409 ======================================================== 00:14:13.409 Total : 18073.78 70.60 3540.48 673.14 20093.35 00:14:13.409 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.409 rmmod nvme_tcp 00:14:13.409 rmmod nvme_fabrics 00:14:13.409 rmmod nvme_keyring 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3758549 ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3758549 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3758549 ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3758549 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3758549 00:14:13.409 10:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3758549' 00:14:13.409 killing process with pid 3758549 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3758549 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3758549 00:14:13.409 nvmf threads initialize successfully 00:14:13.409 bdev subsystem init successfully 00:14:13.409 created a nvmf target service 00:14:13.409 create targets's poll groups done 00:14:13.409 all subsystems of target started 00:14:13.409 nvmf target is running 00:14:13.409 all subsystems of target stopped 00:14:13.409 destroy targets's poll groups done 00:14:13.409 destroyed the nvmf target service 00:14:13.409 bdev subsystem finish successfully 00:14:13.409 nvmf threads destroy successfully 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:13.409 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.409 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:13.669 00:14:13.669 real 0m22.503s 00:14:13.669 user 0m47.004s 00:14:13.669 sys 0m7.683s 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:13.669 ************************************ 00:14:13.669 END TEST nvmf_example 00:14:13.669 ************************************ 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:13.669 10:06:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.931 ************************************ 00:14:13.931 START TEST nvmf_filesystem 00:14:13.931 ************************************ 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:13.931 * Looking for test storage... 00:14:13.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.931 --rc genhtml_branch_coverage=1 00:14:13.931 --rc genhtml_function_coverage=1 00:14:13.931 --rc genhtml_legend=1 00:14:13.931 --rc geninfo_all_blocks=1 00:14:13.931 --rc geninfo_unexecuted_blocks=1 00:14:13.931 00:14:13.931 ' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.931 --rc genhtml_branch_coverage=1 00:14:13.931 --rc genhtml_function_coverage=1 00:14:13.931 --rc genhtml_legend=1 00:14:13.931 --rc geninfo_all_blocks=1 00:14:13.931 --rc geninfo_unexecuted_blocks=1 00:14:13.931 00:14:13.931 ' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.931 --rc genhtml_branch_coverage=1 00:14:13.931 --rc genhtml_function_coverage=1 00:14:13.931 --rc genhtml_legend=1 00:14:13.931 --rc geninfo_all_blocks=1 00:14:13.931 --rc geninfo_unexecuted_blocks=1 00:14:13.931 00:14:13.931 ' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.931 --rc genhtml_branch_coverage=1 00:14:13.931 --rc genhtml_function_coverage=1 00:14:13.931 --rc genhtml_legend=1 00:14:13.931 --rc geninfo_all_blocks=1 00:14:13.931 --rc geninfo_unexecuted_blocks=1 00:14:13.931 00:14:13.931 ' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:13.931 10:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:13.931 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:13.932 
10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:13.932 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:14.196 #define SPDK_CONFIG_H 00:14:14.196 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:14.196 #define SPDK_CONFIG_APPS 1 00:14:14.196 #define SPDK_CONFIG_ARCH native 00:14:14.196 #undef SPDK_CONFIG_ASAN 00:14:14.196 #undef SPDK_CONFIG_AVAHI 00:14:14.196 #undef SPDK_CONFIG_CET 00:14:14.196 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:14.196 #define SPDK_CONFIG_COVERAGE 1 00:14:14.196 #define SPDK_CONFIG_CROSS_PREFIX 00:14:14.196 #undef SPDK_CONFIG_CRYPTO 00:14:14.196 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:14.196 #undef SPDK_CONFIG_CUSTOMOCF 00:14:14.196 #undef SPDK_CONFIG_DAOS 00:14:14.196 #define SPDK_CONFIG_DAOS_DIR 00:14:14.196 #define SPDK_CONFIG_DEBUG 1 00:14:14.196 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:14.196 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:14.196 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:14.196 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:14.196 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:14.196 #undef SPDK_CONFIG_DPDK_UADK 00:14:14.196 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:14.196 #define SPDK_CONFIG_EXAMPLES 1 00:14:14.196 #undef SPDK_CONFIG_FC 00:14:14.196 #define SPDK_CONFIG_FC_PATH 00:14:14.196 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:14.196 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:14.196 #define SPDK_CONFIG_FSDEV 1 00:14:14.196 #undef SPDK_CONFIG_FUSE 00:14:14.196 #undef SPDK_CONFIG_FUZZER 00:14:14.196 #define SPDK_CONFIG_FUZZER_LIB 00:14:14.196 #undef SPDK_CONFIG_GOLANG 00:14:14.196 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:14.196 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:14.196 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:14.196 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:14.196 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:14.196 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:14.196 #undef SPDK_CONFIG_HAVE_LZ4 00:14:14.196 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:14.196 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:14.196 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:14.196 #define SPDK_CONFIG_IDXD 1 00:14:14.196 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:14.196 #undef SPDK_CONFIG_IPSEC_MB 00:14:14.196 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:14.196 #define SPDK_CONFIG_ISAL 1 00:14:14.196 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:14.196 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:14.196 #define SPDK_CONFIG_LIBDIR 00:14:14.196 #undef SPDK_CONFIG_LTO 00:14:14.196 #define SPDK_CONFIG_MAX_LCORES 128 00:14:14.196 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:14.196 #define SPDK_CONFIG_NVME_CUSE 1 00:14:14.196 #undef SPDK_CONFIG_OCF 00:14:14.196 #define SPDK_CONFIG_OCF_PATH 00:14:14.196 #define SPDK_CONFIG_OPENSSL_PATH 00:14:14.196 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:14.196 #define SPDK_CONFIG_PGO_DIR 00:14:14.196 #undef SPDK_CONFIG_PGO_USE 00:14:14.196 #define SPDK_CONFIG_PREFIX /usr/local 00:14:14.196 #undef SPDK_CONFIG_RAID5F 00:14:14.196 #undef SPDK_CONFIG_RBD 00:14:14.196 #define SPDK_CONFIG_RDMA 1 00:14:14.196 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:14.196 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:14.196 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:14.196 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:14.196 #define SPDK_CONFIG_SHARED 1 00:14:14.196 #undef SPDK_CONFIG_SMA 00:14:14.196 #define SPDK_CONFIG_TESTS 1 00:14:14.196 #undef SPDK_CONFIG_TSAN 
00:14:14.196 #define SPDK_CONFIG_UBLK 1 00:14:14.196 #define SPDK_CONFIG_UBSAN 1 00:14:14.196 #undef SPDK_CONFIG_UNIT_TESTS 00:14:14.196 #undef SPDK_CONFIG_URING 00:14:14.196 #define SPDK_CONFIG_URING_PATH 00:14:14.196 #undef SPDK_CONFIG_URING_ZNS 00:14:14.196 #undef SPDK_CONFIG_USDT 00:14:14.196 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:14.196 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:14.196 #define SPDK_CONFIG_VFIO_USER 1 00:14:14.196 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:14.196 #define SPDK_CONFIG_VHOST 1 00:14:14.196 #define SPDK_CONFIG_VIRTIO 1 00:14:14.196 #undef SPDK_CONFIG_VTUNE 00:14:14.196 #define SPDK_CONFIG_VTUNE_DIR 00:14:14.196 #define SPDK_CONFIG_WERROR 1 00:14:14.196 #define SPDK_CONFIG_WPDK_DIR 00:14:14.196 #undef SPDK_CONFIG_XNVME 00:14:14.196 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.196 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:14.197 10:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:14.197 10:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:14.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:14.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3761783 ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3761783 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.OLAtUf 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OLAtUf/tests/target /tmp/spdk.OLAtUf 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:14:14.199 10:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122240618496 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356550144 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7115931648 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666906624 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847697408 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23613440 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.199 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.200 10:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677519360 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=757760 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:14:14.200 * Looking for test storage... 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122240618496 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9330524160 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.200 --rc genhtml_branch_coverage=1 00:14:14.200 --rc genhtml_function_coverage=1 00:14:14.200 --rc genhtml_legend=1 00:14:14.200 --rc geninfo_all_blocks=1 00:14:14.200 --rc geninfo_unexecuted_blocks=1 00:14:14.200 00:14:14.200 ' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.200 --rc genhtml_branch_coverage=1 00:14:14.200 --rc genhtml_function_coverage=1 00:14:14.200 --rc genhtml_legend=1 00:14:14.200 --rc geninfo_all_blocks=1 00:14:14.200 --rc geninfo_unexecuted_blocks=1 00:14:14.200 00:14:14.200 ' 00:14:14.200 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.200 --rc genhtml_branch_coverage=1 00:14:14.200 --rc genhtml_function_coverage=1 00:14:14.200 --rc genhtml_legend=1 00:14:14.200 --rc geninfo_all_blocks=1 00:14:14.201 --rc geninfo_unexecuted_blocks=1 00:14:14.201 00:14:14.201 ' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.201 --rc genhtml_branch_coverage=1 00:14:14.201 --rc genhtml_function_coverage=1 00:14:14.201 --rc genhtml_legend=1 00:14:14.201 --rc geninfo_all_blocks=1 00:14:14.201 --rc geninfo_unexecuted_blocks=1 00:14:14.201 00:14:14.201 ' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.201 10:06:17 
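Note on the "[: : integer expression expected" message recorded just above: test/nvmf/common.sh line 33 runs an integer comparison ('[' '' -eq 1 ']') on a variable that is empty in this run, and [ rejects the empty string. A minimal illustrative sketch (not part of the SPDK scripts) of the failing pattern and a defensive variant:

#!/usr/bin/env bash
# Reproduce the warning seen in the log: '' is not an integer, so -eq fails.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null || echo "comparison failed or false (empty value)"

# Defensive variant: substitute 0 when the variable is unset or empty.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
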
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.201 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:14.462 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:22.602 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:22.602 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.602 10:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:22.602 Found net devices under 0000:31:00.0: cvl_0_0 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:22.602 Found net devices under 0000:31:00.1: cvl_0_1 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:14:22.602 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:22.603 10:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:22.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:14:22.603 00:14:22.603 --- 10.0.0.2 ping statistics --- 00:14:22.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.603 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:14:22.603 00:14:22.603 --- 10.0.0.1 ping statistics --- 00:14:22.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.603 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.603 ************************************ 00:14:22.603 START TEST nvmf_filesystem_no_in_capsule 00:14:22.603 ************************************ 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3766097 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3766097 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.603 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3766097 ']' 00:14:22.603 
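For reference, the nvmf_tcp_init sequence traced above condenses to the standalone sketch below. The interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side) and the 10.0.0.2/10.0.0.1 addresses are the values this E810 host reported in the log; treat them as examples rather than fixed names.

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init steps traced above: move the target-side
# port into its own network namespace and address both ends of the link.
set -e

TARGET_IF=cvl_0_0          # target-side port (names taken from this log)
INITIATOR_IF=cvl_0_1       # initiator-side port
NS=cvl_0_0_ns_spdk         # namespace the SPDK target will run in

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port on the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before the target is started.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The SPDK target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF in the trace), so host-side tools reach it as an ordinary remote NVMe/TCP endpoint.
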
10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.604 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.604 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.604 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.604 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.604 [2024-11-06 10:06:25.945037] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:14:22.604 [2024-11-06 10:06:25.945099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.604 [2024-11-06 10:06:26.035540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.604 [2024-11-06 10:06:26.076667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.604 [2024-11-06 10:06:26.076704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.604 [2024-11-06 10:06:26.076712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.604 [2024-11-06 10:06:26.076720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.604 [2024-11-06 10:06:26.076725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:22.604 [2024-11-06 10:06:26.078572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.604 [2024-11-06 10:06:26.078689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.604 [2024-11-06 10:06:26.078841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.604 [2024-11-06 10:06:26.078841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 [2024-11-06 10:06:26.797648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 [2024-11-06 10:06:26.933518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:23.547 { 00:14:23.547 "name": "Malloc1", 00:14:23.547 "aliases": [ 00:14:23.547 "1fdf3dd4-2729-4d14-a381-b926e82aa740" 00:14:23.547 ], 00:14:23.547 "product_name": "Malloc disk", 00:14:23.547 "block_size": 512, 00:14:23.547 "num_blocks": 1048576, 00:14:23.547 "uuid": "1fdf3dd4-2729-4d14-a381-b926e82aa740", 00:14:23.547 "assigned_rate_limits": { 00:14:23.547 "rw_ios_per_sec": 0, 00:14:23.547 "rw_mbytes_per_sec": 0, 00:14:23.547 "r_mbytes_per_sec": 0, 00:14:23.547 "w_mbytes_per_sec": 0 00:14:23.547 }, 00:14:23.547 "claimed": true, 00:14:23.547 "claim_type": "exclusive_write", 00:14:23.547 "zoned": false, 00:14:23.547 "supported_io_types": { 00:14:23.547 "read": 
true, 00:14:23.547 "write": true, 00:14:23.547 "unmap": true, 00:14:23.547 "flush": true, 00:14:23.547 "reset": true, 00:14:23.547 "nvme_admin": false, 00:14:23.547 "nvme_io": false, 00:14:23.547 "nvme_io_md": false, 00:14:23.547 "write_zeroes": true, 00:14:23.547 "zcopy": true, 00:14:23.547 "get_zone_info": false, 00:14:23.547 "zone_management": false, 00:14:23.547 "zone_append": false, 00:14:23.547 "compare": false, 00:14:23.547 "compare_and_write": false, 00:14:23.547 "abort": true, 00:14:23.547 "seek_hole": false, 00:14:23.547 "seek_data": false, 00:14:23.547 "copy": true, 00:14:23.547 "nvme_iov_md": false 00:14:23.547 }, 00:14:23.547 "memory_domains": [ 00:14:23.547 { 00:14:23.547 "dma_device_id": "system", 00:14:23.547 "dma_device_type": 1 00:14:23.547 }, 00:14:23.547 { 00:14:23.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.547 "dma_device_type": 2 00:14:23.547 } 00:14:23.547 ], 00:14:23.547 "driver_specific": {} 00:14:23.547 } 00:14:23.547 ]' 00:14:23.547 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:23.547 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:14:23.547 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:23.808 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:14:23.808 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:14:23.808 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:14:23.808 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:23.808 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.191 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.191 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:14:25.191 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.191 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:25.191 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:27.107 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:27.366 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:28.306 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 ************************************ 00:14:29.250 START TEST filesystem_ext4 00:14:29.250 ************************************ 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
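(Editor's note) The get_bdev_size helper traced above derives the Malloc1 size reported a few lines later (512 MiB, i.e. 536870912 bytes) from the bdev_get_bdevs JSON by multiplying block_size by num_blocks. A minimal sketch of that pattern, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running nvmf_tgt; the helper body here is illustrative, not the exact autotest_common.sh code:

    # Sketch only: compute a bdev's size in MiB from bdev_get_bdevs output.
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        # rpc_cmd is assumed to forward to scripts/rpc.py for the running target
        bdev_info=$(rpc_cmd bdev_get_bdevs -b "$bdev_name")
        bs=$(echo "$bdev_info" | jq '.[] .block_size')    # 512 in this run
        nb=$(echo "$bdev_info" | jq '.[] .num_blocks')    # 1048576 in this run
        echo $(( bs * nb / 1024 / 1024 ))                 # 512 (MiB)
    }

    malloc_size=$(( $(get_bdev_size Malloc1) * 1024 * 1024 ))   # 536870912 bytes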
00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:14:29.250 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:29.250 mke2fs 1.47.0 (5-Feb-2023) 00:14:29.250 Discarding device blocks: 0/522240 done 00:14:29.250 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:29.250 Filesystem UUID: 05781c2a-a45c-4102-8471-09a68b66c3e9 00:14:29.250 Superblock backups stored on blocks: 00:14:29.250 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:29.250 00:14:29.250 Allocating group tables: 0/64 done 00:14:29.250 Writing inode tables: 0/64 done 00:14:29.510 Creating journal (8192 blocks): done 00:14:29.510 Writing superblocks and filesystem accounting information: 0/64 done 00:14:29.510 00:14:29.510 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:14:29.510 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:36.149 
10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3766097 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:36.149 00:14:36.149 real 0m5.805s 00:14:36.149 user 0m0.028s 00:14:36.149 sys 0m0.081s 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:36.149 ************************************ 00:14:36.149 END TEST filesystem_ext4 00:14:36.149 ************************************ 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.149 ************************************ 00:14:36.149 START TEST filesystem_btrfs 00:14:36.149 ************************************ 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:14:36.149 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:14:36.150 10:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:14:36.150 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:14:36.150 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:36.150 btrfs-progs v6.8.1 00:14:36.150 See https://btrfs.readthedocs.io for more information. 00:14:36.150 00:14:36.150 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:36.150 NOTE: several default settings have changed in version 5.15, please make sure 00:14:36.150 this does not affect your deployments: 00:14:36.150 - DUP for metadata (-m dup) 00:14:36.150 - enabled no-holes (-O no-holes) 00:14:36.150 - enabled free-space-tree (-R free-space-tree) 00:14:36.150 00:14:36.150 Label: (null) 00:14:36.150 UUID: 805812dd-0ce9-4a63-adc9-3c54350a1e23 00:14:36.150 Node size: 16384 00:14:36.150 Sector size: 4096 (CPU page size: 4096) 00:14:36.150 Filesystem size: 510.00MiB 00:14:36.150 Block group profiles: 00:14:36.150 Data: single 8.00MiB 00:14:36.150 Metadata: DUP 32.00MiB 00:14:36.150 System: DUP 8.00MiB 00:14:36.150 SSD detected: yes 00:14:36.150 Zoned device: no 00:14:36.150 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:36.150 Checksum: crc32c 00:14:36.150 Number of devices: 1 00:14:36.150 Devices: 00:14:36.150 ID SIZE PATH 00:14:36.150 1 510.00MiB /dev/nvme0n1p1 00:14:36.150 00:14:36.150 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:14:36.150 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:36.721 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3766097 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:36.721 
10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:36.721 00:14:36.721 real 0m1.498s 00:14:36.721 user 0m0.026s 00:14:36.721 sys 0m0.123s 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:36.721 ************************************ 00:14:36.721 END TEST filesystem_btrfs 00:14:36.721 ************************************ 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.721 ************************************ 00:14:36.721 START TEST filesystem_xfs 00:14:36.721 ************************************ 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:14:36.721 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:36.721 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:36.721 = sectsz=512 attr=2, projid32bit=1 00:14:36.721 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:36.721 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:36.721 data 
= bsize=4096 blocks=130560, imaxpct=25 00:14:36.721 = sunit=0 swidth=0 blks 00:14:36.721 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:36.721 log =internal log bsize=4096 blocks=16384, version=2 00:14:36.721 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:36.721 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:38.104 Discarding blocks...Done. 00:14:38.104 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:14:38.104 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3766097 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:40.019 00:14:40.019 real 0m3.173s 00:14:40.019 user 0m0.034s 00:14:40.019 sys 0m0.073s 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 ************************************ 00:14:40.019 END TEST filesystem_xfs 00:14:40.019 ************************************ 00:14:40.019 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.280 10:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:40.280 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3766097 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3766097 ']' 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3766097 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766097 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766097' 00:14:40.563 killing process with pid 3766097 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3766097 00:14:40.563 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3766097 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:40.918 00:14:40.918 real 0m18.203s 00:14:40.918 user 1m11.847s 00:14:40.918 sys 0m1.489s 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 ************************************ 00:14:40.918 END TEST nvmf_filesystem_no_in_capsule 00:14:40.918 ************************************ 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 ************************************ 00:14:40.918 START TEST nvmf_filesystem_in_capsule 00:14:40.918 ************************************ 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3769924 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3769924 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3769924 ']' 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
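(Editor's note) The teardown path traced just above (filesystem.sh lines 91 through 101) is the inverse of the setup: sync, disconnect the initiator from the subsystem, delete the subsystem over RPC, then kill the nvmf_tgt reactor and wait for the PID to exit before the in-capsule variant starts a fresh target. A rough sketch under the same assumptions; the NQN and the nvmfpid/rpc_cmd names come from the trace, the control flow is illustrative:

    # Sketch only: tear down the initiator side, then the target side.
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # drop the host connection
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    if kill -0 "$nvmfpid" 2>/dev/null; then                 # target still running?
        kill "$nvmfpid"                                      # SIGTERM the nvmf_tgt app
        wait "$nvmfpid"                                      # reap it (it was started by this shell)
    fi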
00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.918 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 [2024-11-06 10:06:44.232791] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:14:40.918 [2024-11-06 10:06:44.232846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.918 [2024-11-06 10:06:44.321089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.918 [2024-11-06 10:06:44.360406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.918 [2024-11-06 10:06:44.360443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.918 [2024-11-06 10:06:44.360456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.918 [2024-11-06 10:06:44.360463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.918 [2024-11-06 10:06:44.360468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.918 [2024-11-06 10:06:44.361979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.918 [2024-11-06 10:06:44.362095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.918 [2024-11-06 10:06:44.362249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.918 [2024-11-06 10:06:44.362250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.881 [2024-11-06 10:06:45.072168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.881 10:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.881 Malloc1 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.881 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 [2024-11-06 10:06:45.208293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:14:41.882 10:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:41.882 { 00:14:41.882 "name": "Malloc1", 00:14:41.882 "aliases": [ 00:14:41.882 "c85d6d33-6124-49d6-8a15-ce2b318625ae" 00:14:41.882 ], 00:14:41.882 "product_name": "Malloc disk", 00:14:41.882 "block_size": 512, 00:14:41.882 "num_blocks": 1048576, 00:14:41.882 "uuid": "c85d6d33-6124-49d6-8a15-ce2b318625ae", 00:14:41.882 "assigned_rate_limits": { 00:14:41.882 "rw_ios_per_sec": 0, 00:14:41.882 "rw_mbytes_per_sec": 0, 00:14:41.882 "r_mbytes_per_sec": 0, 00:14:41.882 "w_mbytes_per_sec": 0 00:14:41.882 }, 00:14:41.882 "claimed": true, 00:14:41.882 "claim_type": "exclusive_write", 00:14:41.882 "zoned": false, 00:14:41.882 "supported_io_types": { 00:14:41.882 "read": true, 00:14:41.882 "write": true, 00:14:41.882 "unmap": true, 00:14:41.882 "flush": true, 00:14:41.882 "reset": true, 00:14:41.882 "nvme_admin": false, 00:14:41.882 "nvme_io": false, 00:14:41.882 "nvme_io_md": false, 00:14:41.882 "write_zeroes": true, 00:14:41.882 "zcopy": true, 00:14:41.882 "get_zone_info": false, 00:14:41.882 "zone_management": false, 00:14:41.882 "zone_append": false, 00:14:41.882 "compare": false, 00:14:41.882 "compare_and_write": false, 00:14:41.882 "abort": true, 00:14:41.882 "seek_hole": false, 00:14:41.882 "seek_data": false, 00:14:41.882 "copy": true, 00:14:41.882 "nvme_iov_md": false 00:14:41.882 }, 00:14:41.882 "memory_domains": [ 00:14:41.882 { 00:14:41.882 "dma_device_id": "system", 00:14:41.882 "dma_device_type": 1 00:14:41.882 }, 00:14:41.882 { 00:14:41.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.882 "dma_device_type": 2 00:14:41.882 } 00:14:41.882 ], 00:14:41.882 "driver_specific": {} 00:14:41.882 } 00:14:41.882 ]' 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:41.882 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.793 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.793 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:14:43.793 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.793 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:43.793 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:45.704 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:45.965 10:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:46.535 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.476 ************************************ 00:14:47.476 START TEST filesystem_in_capsule_ext4 00:14:47.476 ************************************ 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:14:47.476 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:47.476 mke2fs 1.47.0 (5-Feb-2023) 00:14:47.736 Discarding device blocks: 0/522240 done 00:14:47.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:47.736 Filesystem UUID: d334c3ea-1978-411f-9145-fc85fbc610c5 00:14:47.736 Superblock backups stored on blocks: 00:14:47.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:47.736 00:14:47.736 Allocating group tables: 0/64 done 00:14:47.736 Writing inode tables: 
0/64 done 00:14:47.736 Creating journal (8192 blocks): done 00:14:47.736 Writing superblocks and filesystem accounting information: 0/64 done 00:14:47.736 00:14:47.736 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:14:47.736 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3769924 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:54.316 00:14:54.316 real 0m5.847s 00:14:54.316 user 0m0.027s 00:14:54.316 sys 0m0.078s 00:14:54.316 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 ************************************ 00:14:54.317 END TEST filesystem_in_capsule_ext4 00:14:54.317 ************************************ 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 
************************************ 00:14:54.317 START TEST filesystem_in_capsule_btrfs 00:14:54.317 ************************************ 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:14:54.317 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:54.317 btrfs-progs v6.8.1 00:14:54.317 See https://btrfs.readthedocs.io for more information. 00:14:54.317 00:14:54.317 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
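(Editor's note) make_filesystem, traced above for ext4 and again here for btrfs, mainly has to pick the right force flag (mkfs.ext4 takes -F, the btrfs and xfs tools take -f) before formatting the partition, which is exactly what the '[' btrfs = ext4 ']' branch in the trace decides. A condensed sketch of that helper; the retry counter i visible in the trace is omitted, so treat this as illustrative rather than the exact autotest_common.sh implementation:

    # Sketch only: format a device with the requested filesystem type.
    make_filesystem() {
        local fstype=$1 dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 needs -F to overwrite an existing signature
        else
            force=-f        # mkfs.btrfs / mkfs.xfs use -f for the same purpose
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    make_filesystem btrfs /dev/nvme0n1p1    # what target/filesystem.sh@21 runs here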
00:14:54.317 NOTE: several default settings have changed in version 5.15, please make sure 00:14:54.317 this does not affect your deployments: 00:14:54.317 - DUP for metadata (-m dup) 00:14:54.317 - enabled no-holes (-O no-holes) 00:14:54.317 - enabled free-space-tree (-R free-space-tree) 00:14:54.317 00:14:54.317 Label: (null) 00:14:54.317 UUID: 79bc3719-f5e2-4205-9d7b-3f1082c575a5 00:14:54.317 Node size: 16384 00:14:54.317 Sector size: 4096 (CPU page size: 4096) 00:14:54.317 Filesystem size: 510.00MiB 00:14:54.317 Block group profiles: 00:14:54.317 Data: single 8.00MiB 00:14:54.317 Metadata: DUP 32.00MiB 00:14:54.317 System: DUP 8.00MiB 00:14:54.317 SSD detected: yes 00:14:54.317 Zoned device: no 00:14:54.317 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:54.317 Checksum: crc32c 00:14:54.317 Number of devices: 1 00:14:54.317 Devices: 00:14:54.317 ID SIZE PATH 00:14:54.317 1 510.00MiB /dev/nvme0n1p1 00:14:54.317 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3769924 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:54.317 00:14:54.317 real 0m0.853s 00:14:54.317 user 0m0.035s 00:14:54.317 sys 0m0.118s 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:14:54.317 ************************************ 00:14:54.317 END TEST filesystem_in_capsule_btrfs 00:14:54.317 ************************************ 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 ************************************ 00:14:54.317 START TEST filesystem_in_capsule_xfs 00:14:54.317 ************************************ 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:14:54.317 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:54.578 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:54.578 = sectsz=512 attr=2, projid32bit=1 00:14:54.578 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:54.578 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:54.578 data = bsize=4096 blocks=130560, imaxpct=25 00:14:54.578 = sunit=0 swidth=0 blks 00:14:54.578 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:54.578 log =internal log bsize=4096 blocks=16384, version=2 00:14:54.578 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:54.578 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:55.518 Discarding blocks...Done. 
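(Editor's note) Each filesystem variant then runs the same smoke test (filesystem.sh lines 23 through 43): mount the new filesystem, create and delete a file with syncs in between, unmount, confirm the target process is still alive, and confirm the namespace and its partition are still visible on the host. A compact sketch, with $nvmfpid and the device names taken from this run and everything else illustrative:

    # Sketch only: the per-filesystem smoke test seen in the trace.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                                  # target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1               # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1             # partition still present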
00:14:55.518 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:14:55.518 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3769924 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:57.428 00:14:57.428 real 0m3.014s 00:14:57.428 user 0m0.030s 00:14:57.428 sys 0m0.078s 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:57.428 ************************************ 00:14:57.428 END TEST filesystem_in_capsule_xfs 00:14:57.428 ************************************ 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:57.428 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3769924 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3769924 ']' 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3769924 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.999 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3769924 00:14:58.259 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:58.259 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:58.259 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3769924' 00:14:58.259 killing process with pid 3769924 00:14:58.259 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3769924 00:14:58.259 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3769924 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:58.520 00:14:58.520 real 0m17.605s 00:14:58.520 user 1m9.531s 00:14:58.520 sys 0m1.422s 00:14:58.520 10:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.520 ************************************ 00:14:58.520 END TEST nvmf_filesystem_in_capsule 00:14:58.520 ************************************ 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.520 rmmod nvme_tcp 00:14:58.520 rmmod nvme_fabrics 00:14:58.520 rmmod nvme_keyring 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.520 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.062 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.062 00:15:01.062 real 0m46.761s 00:15:01.062 user 2m23.785s 00:15:01.062 sys 0m9.345s 00:15:01.063 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:01.063 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 
************************************ 00:15:01.063 END TEST nvmf_filesystem 00:15:01.063 ************************************ 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 ************************************ 00:15:01.063 START TEST nvmf_target_discovery 00:15:01.063 ************************************ 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:01.063 * Looking for test storage... 00:15:01.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:01.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.063 --rc genhtml_branch_coverage=1 00:15:01.063 --rc genhtml_function_coverage=1 00:15:01.063 --rc genhtml_legend=1 00:15:01.063 --rc geninfo_all_blocks=1 00:15:01.063 --rc geninfo_unexecuted_blocks=1 00:15:01.063 00:15:01.063 ' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:01.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.063 --rc genhtml_branch_coverage=1 00:15:01.063 --rc genhtml_function_coverage=1 00:15:01.063 --rc genhtml_legend=1 00:15:01.063 --rc geninfo_all_blocks=1 00:15:01.063 --rc geninfo_unexecuted_blocks=1 00:15:01.063 00:15:01.063 ' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:01.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.063 --rc genhtml_branch_coverage=1 00:15:01.063 --rc genhtml_function_coverage=1 00:15:01.063 --rc genhtml_legend=1 00:15:01.063 --rc geninfo_all_blocks=1 00:15:01.063 --rc geninfo_unexecuted_blocks=1 00:15:01.063 00:15:01.063 ' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:01.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.063 --rc genhtml_branch_coverage=1 00:15:01.063 --rc genhtml_function_coverage=1 00:15:01.063 --rc genhtml_legend=1 00:15:01.063 --rc geninfo_all_blocks=1 00:15:01.063 --rc geninfo_unexecuted_blocks=1 00:15:01.063 00:15:01.063 ' 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.063 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.064 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:09.198 10:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:09.198 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:09.198 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:09.198 Found net devices under 0000:31:00.0: cvl_0_0 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.198 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
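The loop traced here (and continuing below for 0000:31:00.1) maps each detected e810 PCI function to its kernel net device by globbing sysfs. Stripped of the e810/x722/mlx filtering and link-state checks in nvmf/common.sh, the mapping reduces to roughly this sketch, seeded with the two PCI addresses found in this run:

    #!/usr/bin/env bash
    # Sketch of the pci -> netdev mapping shown in the xtrace; the real script also
    # verifies the interface is up and that the glob matched before collecting it.
    pci_devs=(0000:31:00.0 0000:31:00.1)                   # the two e810 functions found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs owned by this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names, e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "Found net devices: ${net_devs[*]}"               # cvl_0_0 cvl_0_1 on this machine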
00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:09.199 Found net devices under 0000:31:00.1: cvl_0_1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.199 10:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:09.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:15:09.199 00:15:09.199 --- 10.0.0.2 ping statistics --- 00:15:09.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.199 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:15:09.199 00:15:09.199 --- 10.0.0.1 ping statistics --- 00:15:09.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.199 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.199 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3778279 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3778279 00:15:09.459 10:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3778279 ']' 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:09.459 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.459 [2024-11-06 10:07:12.775644] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:09.459 [2024-11-06 10:07:12.775710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.459 [2024-11-06 10:07:12.865503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.459 [2024-11-06 10:07:12.906794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.459 [2024-11-06 10:07:12.906831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.459 [2024-11-06 10:07:12.906840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.459 [2024-11-06 10:07:12.906847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.459 [2024-11-06 10:07:12.906854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
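Before the reactors start below, nvmf_tcp_init (traced above) wires the two e810 ports back to back through a network namespace so the initiator at 10.0.0.1 (cvl_0_1, root namespace) can reach the target at 10.0.0.2 (cvl_0_0, inside cvl_0_0_ns_spdk). Compressed into a sketch, with the nvmf_tgt binary path shortened from the full workspace path:

    #!/usr/bin/env bash
    # Sketch of the namespace/TCP wiring and target launch traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator port (TCP 4420)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    # start the target inside the namespace (pid 3778279 in this run)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The rpc_cmd calls that follow then create the TCP transport, the four cnode subsystems with their Null bdev namespaces, and the 4420/4430 listeners that the discovery test queries.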
00:15:09.459 [2024-11-06 10:07:12.908552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.459 [2024-11-06 10:07:12.908671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.459 [2024-11-06 10:07:12.908827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.459 [2024-11-06 10:07:12.908828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.399 [2024-11-06 10:07:13.623484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:10.399 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 Null1 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 [2024-11-06 10:07:13.683804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 Null2 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:10.400 Null3 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 Null4 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.400 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:10.661 00:15:10.662 Discovery Log Number of Records 6, Generation counter 6 00:15:10.662 =====Discovery Log Entry 0====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: current discovery subsystem 00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4420 00:15:10.662 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: explicit discovery connections, duplicate discovery information 00:15:10.662 sectype: none 00:15:10.662 =====Discovery Log Entry 1====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: nvme subsystem 00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4420 00:15:10.662 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: none 00:15:10.662 sectype: none 00:15:10.662 =====Discovery Log Entry 2====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: nvme subsystem 00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4420 00:15:10.662 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: none 00:15:10.662 sectype: none 00:15:10.662 =====Discovery Log Entry 3====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: nvme subsystem 00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4420 00:15:10.662 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: none 00:15:10.662 sectype: none 00:15:10.662 =====Discovery Log Entry 4====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: nvme subsystem 
00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4420 00:15:10.662 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: none 00:15:10.662 sectype: none 00:15:10.662 =====Discovery Log Entry 5====== 00:15:10.662 trtype: tcp 00:15:10.662 adrfam: ipv4 00:15:10.662 subtype: discovery subsystem referral 00:15:10.662 treq: not required 00:15:10.662 portid: 0 00:15:10.662 trsvcid: 4430 00:15:10.662 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:10.662 traddr: 10.0.0.2 00:15:10.662 eflags: none 00:15:10.662 sectype: none 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:10.662 Perform nvmf subsystem discovery via RPC 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 [ 00:15:10.662 { 00:15:10.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:10.662 "subtype": "Discovery", 00:15:10.662 "listen_addresses": [ 00:15:10.662 { 00:15:10.662 "trtype": "TCP", 00:15:10.662 "adrfam": "IPv4", 00:15:10.662 "traddr": "10.0.0.2", 00:15:10.662 "trsvcid": "4420" 00:15:10.662 } 00:15:10.662 ], 00:15:10.662 "allow_any_host": true, 00:15:10.662 "hosts": [] 00:15:10.662 }, 00:15:10.662 { 00:15:10.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.662 "subtype": "NVMe", 00:15:10.662 "listen_addresses": [ 00:15:10.662 { 00:15:10.662 "trtype": "TCP", 00:15:10.662 "adrfam": "IPv4", 00:15:10.662 "traddr": "10.0.0.2", 00:15:10.662 "trsvcid": "4420" 00:15:10.662 } 00:15:10.662 ], 00:15:10.662 "allow_any_host": true, 00:15:10.662 "hosts": [], 00:15:10.662 "serial_number": "SPDK00000000000001", 00:15:10.662 "model_number": "SPDK bdev Controller", 00:15:10.662 "max_namespaces": 32, 00:15:10.662 "min_cntlid": 1, 00:15:10.662 "max_cntlid": 65519, 00:15:10.662 "namespaces": [ 00:15:10.662 { 00:15:10.662 "nsid": 1, 00:15:10.662 "bdev_name": "Null1", 00:15:10.662 "name": "Null1", 00:15:10.662 "nguid": "CE66CD99B31644B1BDDBDBCE39DFB378", 00:15:10.662 "uuid": "ce66cd99-b316-44b1-bddb-dbce39dfb378" 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 }, 00:15:10.662 { 00:15:10.662 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:10.662 "subtype": "NVMe", 00:15:10.662 "listen_addresses": [ 00:15:10.662 { 00:15:10.662 "trtype": "TCP", 00:15:10.662 "adrfam": "IPv4", 00:15:10.662 "traddr": "10.0.0.2", 00:15:10.662 "trsvcid": "4420" 00:15:10.662 } 00:15:10.662 ], 00:15:10.662 "allow_any_host": true, 00:15:10.662 "hosts": [], 00:15:10.662 "serial_number": "SPDK00000000000002", 00:15:10.662 "model_number": "SPDK bdev Controller", 00:15:10.662 "max_namespaces": 32, 00:15:10.662 "min_cntlid": 1, 00:15:10.662 "max_cntlid": 65519, 00:15:10.662 "namespaces": [ 00:15:10.662 { 00:15:10.662 "nsid": 1, 00:15:10.662 "bdev_name": "Null2", 00:15:10.662 "name": "Null2", 00:15:10.662 "nguid": "8F6E40DA0ED64AAFBD45081CB78EF964", 00:15:10.662 "uuid": "8f6e40da-0ed6-4aaf-bd45-081cb78ef964" 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 }, 00:15:10.662 { 00:15:10.662 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:10.662 "subtype": "NVMe", 00:15:10.662 "listen_addresses": [ 00:15:10.662 { 00:15:10.662 "trtype": "TCP", 00:15:10.662 "adrfam": "IPv4", 00:15:10.662 "traddr": "10.0.0.2", 
00:15:10.662 "trsvcid": "4420" 00:15:10.662 } 00:15:10.662 ], 00:15:10.662 "allow_any_host": true, 00:15:10.662 "hosts": [], 00:15:10.662 "serial_number": "SPDK00000000000003", 00:15:10.662 "model_number": "SPDK bdev Controller", 00:15:10.662 "max_namespaces": 32, 00:15:10.662 "min_cntlid": 1, 00:15:10.662 "max_cntlid": 65519, 00:15:10.662 "namespaces": [ 00:15:10.662 { 00:15:10.662 "nsid": 1, 00:15:10.662 "bdev_name": "Null3", 00:15:10.662 "name": "Null3", 00:15:10.662 "nguid": "47B5362F92094DDAAA6B19BF7CF41CD4", 00:15:10.662 "uuid": "47b5362f-9209-4dda-aa6b-19bf7cf41cd4" 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 }, 00:15:10.662 { 00:15:10.662 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:10.662 "subtype": "NVMe", 00:15:10.662 "listen_addresses": [ 00:15:10.662 { 00:15:10.662 "trtype": "TCP", 00:15:10.662 "adrfam": "IPv4", 00:15:10.662 "traddr": "10.0.0.2", 00:15:10.662 "trsvcid": "4420" 00:15:10.662 } 00:15:10.662 ], 00:15:10.662 "allow_any_host": true, 00:15:10.662 "hosts": [], 00:15:10.662 "serial_number": "SPDK00000000000004", 00:15:10.662 "model_number": "SPDK bdev Controller", 00:15:10.662 "max_namespaces": 32, 00:15:10.662 "min_cntlid": 1, 00:15:10.662 "max_cntlid": 65519, 00:15:10.662 "namespaces": [ 00:15:10.662 { 00:15:10.662 "nsid": 1, 00:15:10.662 "bdev_name": "Null4", 00:15:10.662 "name": "Null4", 00:15:10.662 "nguid": "3EAFF8D5DECF4DD4B8B687434A30FFD8", 00:15:10.662 "uuid": "3eaff8d5-decf-4dd4-b8b6-87434a30ffd8" 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 } 00:15:10.662 ] 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.662 10:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.662 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:10.923 10:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.923 rmmod nvme_tcp 00:15:10.923 rmmod nvme_fabrics 00:15:10.923 rmmod nvme_keyring 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3778279 ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3778279 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3778279 ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3778279 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3778279 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3778279' 00:15:10.923 killing process with pid 3778279 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3778279 00:15:10.923 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3778279 00:15:11.183 10:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.183 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:13.724 00:15:13.724 real 0m12.570s 00:15:13.724 user 0m9.096s 00:15:13.724 sys 0m6.770s 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.724 ************************************ 00:15:13.724 END TEST nvmf_target_discovery 00:15:13.724 ************************************ 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.724 ************************************ 00:15:13.724 START TEST nvmf_referrals 00:15:13.724 ************************************ 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:13.724 * Looking for test storage... 
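
For reference, the nvmf_target_discovery run that ends above boils down to a short RPC plus nvme-cli sequence. The sketch below is a condensed reading of the commands visible in the trace, not the test script itself: it assumes a running nvmf_tgt, uses scripts/rpc.py in place of the log's rpc_cmd helper (RPC socket left at the default /var/tmp/spdk.sock), assumes the TCP transport was created earlier in the test, and drops the --hostnqn/--hostid flags that the CI host passes to nvme discover.

    # Condensed sketch of the discovery test flow traced above (assumptions noted in the text).
    RPC=./scripts/rpc.py

    for i in 1 2 3 4; do
        $RPC bdev_null_create Null$i 102400 512                     # 100 MiB null bdev, 512 B blocks
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # The initiator side then expects 6 discovery log records: the discovery
    # subsystem itself, the four cnodeN subsystems, and the 4430 referral.
    nvme discover -t tcp -a 10.0.0.2 -s 4420

    # Teardown mirrors the setup.
    for i in 1 2 3 4; do
        $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        $RPC bdev_null_delete Null$i
    done
    $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
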
00:15:13.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.724 --rc genhtml_branch_coverage=1 00:15:13.724 --rc genhtml_function_coverage=1 00:15:13.724 --rc genhtml_legend=1 00:15:13.724 --rc geninfo_all_blocks=1 00:15:13.724 --rc geninfo_unexecuted_blocks=1 00:15:13.724 00:15:13.724 ' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.724 --rc genhtml_branch_coverage=1 00:15:13.724 --rc genhtml_function_coverage=1 00:15:13.724 --rc genhtml_legend=1 00:15:13.724 --rc geninfo_all_blocks=1 00:15:13.724 --rc geninfo_unexecuted_blocks=1 00:15:13.724 00:15:13.724 ' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.724 --rc genhtml_branch_coverage=1 00:15:13.724 --rc genhtml_function_coverage=1 00:15:13.724 --rc genhtml_legend=1 00:15:13.724 --rc geninfo_all_blocks=1 00:15:13.724 --rc geninfo_unexecuted_blocks=1 00:15:13.724 00:15:13.724 ' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.724 --rc genhtml_branch_coverage=1 00:15:13.724 --rc genhtml_function_coverage=1 00:15:13.724 --rc genhtml_legend=1 00:15:13.724 --rc geninfo_all_blocks=1 00:15:13.724 --rc geninfo_unexecuted_blocks=1 00:15:13.724 00:15:13.724 ' 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.724 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:13.725 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:21.865 10:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.865 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:21.866 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:21.866 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:21.866 
10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:21.866 Found net devices under 0000:31:00.0: cvl_0_0 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:21.866 Found net devices under 0000:31:00.1: cvl_0_1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:21.866 10:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:21.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:21.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:15:21.866 00:15:21.866 --- 10.0.0.2 ping statistics --- 00:15:21.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.866 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:15:21.866 00:15:21.866 --- 10.0.0.1 ping statistics --- 00:15:21.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.866 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:21.866 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3783361 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3783361 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3783361 ']' 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
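
The nvmftestinit phase traced just above builds the two-sided TCP test bed out of a single host: one E810 port stays in the root namespace as the initiator, the other is moved into a network namespace and the target runs inside it. A minimal sketch of that setup follows; cvl_0_0/cvl_0_1 are the interface names this particular CI host exposes for the ice driver, so substitute your own NIC names, and the nvmf_tgt path is the workspace build location from the log.

    # Minimal sketch of the namespace-based test bed seen in the log
    # (interface names cvl_0_0/cvl_0_1 are specific to this CI host).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic through

    # Sanity-check both directions, then start the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
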
00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.128 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:22.128 [2024-11-06 10:07:25.453125] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:22.128 [2024-11-06 10:07:25.453191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.128 [2024-11-06 10:07:25.555075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.128 [2024-11-06 10:07:25.597283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.128 [2024-11-06 10:07:25.597321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.128 [2024-11-06 10:07:25.597330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.128 [2024-11-06 10:07:25.597337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.128 [2024-11-06 10:07:25.597343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.128 [2024-11-06 10:07:25.598929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.128 [2024-11-06 10:07:25.599042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.128 [2024-11-06 10:07:25.599044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.128 [2024-11-06 10:07:25.598981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.072 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.072 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:15:23.072 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:23.072 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 [2024-11-06 10:07:26.309805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:15:23.073 [2024-11-06 10:07:26.326037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:23.073 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.333 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:23.334 10:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:23.334 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:23.594 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:23.594 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:23.854 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:24.115 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.377 10:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:24.377 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:24.638 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:24.639 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:24.899 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
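For reference, the referral round-trip exercised by referrals.sh in the trace above can be replayed by hand roughly as follows. This is a minimal sketch, not the test script itself: it assumes a running SPDK target with its discovery service listening on 10.0.0.2:8009, and it calls scripts/rpc.py directly where the trace goes through the rpc_cmd wrapper; addresses, ports, and NQNs are copied from the log.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # job workspace path, as above

    # Add one referral of each flavour: a plain discovery service and a specific subsystem NQN.
    $SPDK/scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    $SPDK/scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # Target-side view of the referral records.
    $SPDK/scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: the same entries appear in the discovery log page.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove them again so nvmf_discovery_get_referrals reports length 0, as checked above.
    $SPDK/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery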
00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.160 rmmod nvme_tcp 00:15:25.160 rmmod nvme_fabrics 00:15:25.160 rmmod nvme_keyring 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3783361 ']' 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3783361 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3783361 ']' 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3783361 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.160 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3783361 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3783361' 00:15:25.422 killing process with pid 3783361 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3783361 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3783361 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.422 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.422 10:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:27.970 00:15:27.970 real 0m14.196s 00:15:27.970 user 0m16.167s 00:15:27.970 sys 0m7.235s 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:27.970 ************************************ 00:15:27.970 END TEST nvmf_referrals 00:15:27.970 ************************************ 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:27.970 10:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:27.971 10:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.971 ************************************ 00:15:27.971 START TEST nvmf_connect_disconnect 00:15:27.971 ************************************ 00:15:27.971 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:27.971 * Looking for test storage... 00:15:27.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.971 10:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.971 --rc genhtml_branch_coverage=1 00:15:27.971 --rc genhtml_function_coverage=1 00:15:27.971 --rc genhtml_legend=1 00:15:27.971 --rc geninfo_all_blocks=1 00:15:27.971 --rc geninfo_unexecuted_blocks=1 00:15:27.971 00:15:27.971 ' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.971 --rc genhtml_branch_coverage=1 00:15:27.971 --rc genhtml_function_coverage=1 00:15:27.971 --rc genhtml_legend=1 00:15:27.971 --rc geninfo_all_blocks=1 00:15:27.971 --rc geninfo_unexecuted_blocks=1 00:15:27.971 00:15:27.971 ' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.971 --rc genhtml_branch_coverage=1 00:15:27.971 --rc genhtml_function_coverage=1 00:15:27.971 --rc genhtml_legend=1 00:15:27.971 --rc geninfo_all_blocks=1 00:15:27.971 --rc geninfo_unexecuted_blocks=1 00:15:27.971 00:15:27.971 ' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.971 --rc genhtml_branch_coverage=1 00:15:27.971 --rc genhtml_function_coverage=1 00:15:27.971 --rc genhtml_legend=1 00:15:27.971 --rc geninfo_all_blocks=1 00:15:27.971 --rc geninfo_unexecuted_blocks=1 00:15:27.971 00:15:27.971 ' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.971 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.972 10:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:27.972 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.120 
10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:36.120 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.120 
10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:36.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:36.120 Found net devices under 0000:31:00.0: cvl_0_0 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
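The device walk traced above (gather_supported_nvmf_pci_devs in nvmf/common.sh) amounts to matching supported PCI vendor/device IDs and reading the netdev names out of sysfs. A rough standalone equivalent is sketched below; it assumes lspci is available and that the E810 ports carry this lab's cvl_0_* names, whereas the real helper works from a pre-built pci_bus_cache map rather than calling lspci.

    intel=8086
    e810_dev=159b                        # device ID matched in the trace above (0x159b)
    for pci in $(lspci -Dn -d ${intel}:${e810_dev} | awk '{print $1}'); do
        # each supported PCI function exposes its kernel net device under sysfs
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: $(basename "$dev")"
        done
    done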
00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:36.120 Found net devices under 0000:31:00.1: cvl_0_1 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.120 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.121 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:15:36.382 00:15:36.382 --- 10.0.0.2 ping statistics --- 00:15:36.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.382 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:15:36.382 00:15:36.382 --- 10.0.0.1 ping statistics --- 00:15:36.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.382 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3788986 00:15:36.382 10:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3788986 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3788986 ']' 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.382 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.382 [2024-11-06 10:07:39.861123] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:36.382 [2024-11-06 10:07:39.861189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.642 [2024-11-06 10:07:39.955684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.642 [2024-11-06 10:07:39.997868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.642 [2024-11-06 10:07:39.997902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.642 [2024-11-06 10:07:39.997911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.642 [2024-11-06 10:07:39.997918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.642 [2024-11-06 10:07:39.997924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
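The namespace plumbing traced above can be summarized as follows; every command is lifted from the log, with only the backgrounding of nvmf_tgt added here for illustration. The target-side port (cvl_0_0) is moved into its own network namespace so that initiator and target genuinely talk across the physical link.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns

    # nvmfappstart then launches the target inside that namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &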
00:15:36.642 [2024-11-06 10:07:39.999584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.642 [2024-11-06 10:07:39.999706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.642 [2024-11-06 10:07:39.999866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.642 [2024-11-06 10:07:39.999880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.215 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.476 [2024-11-06 10:07:40.717705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.476 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.476 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:37.476 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.476 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.476 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.477 10:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:37.477 [2024-11-06 10:07:40.792088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:37.477 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:41.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.926 rmmod nvme_tcp 00:15:55.926 rmmod nvme_fabrics 00:15:55.926 rmmod nvme_keyring 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3788986 ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3788986 ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
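The subsystem configuration and the five connect/disconnect rounds reported above correspond to the following RPC and nvme-cli sequence. This is a simplified sketch: rpc.py stands in for the test's rpc_cmd wrapper, the --hostnqn/--hostid options carried by the test's nvme calls are omitted, and all values are copied from the trace.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                        # 64 MiB bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # num_iterations=5 in connect_disconnect.sh; each pass prints
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" as seen above.
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done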
00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3788986' 00:15:55.926 killing process with pid 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3788986 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.926 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:58.471 00:15:58.471 real 0m30.456s 00:15:58.471 user 1m19.415s 00:15:58.471 sys 0m7.943s 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:58.471 ************************************ 00:15:58.471 END TEST nvmf_connect_disconnect 00:15:58.471 ************************************ 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:58.471 10:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:58.471 10:08:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.471 ************************************ 00:15:58.471 START TEST nvmf_multitarget 00:15:58.471 ************************************ 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:58.472 * Looking for test storage... 00:15:58.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:58.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.472 --rc genhtml_branch_coverage=1 00:15:58.472 --rc genhtml_function_coverage=1 00:15:58.472 --rc genhtml_legend=1 00:15:58.472 --rc geninfo_all_blocks=1 00:15:58.472 --rc geninfo_unexecuted_blocks=1 00:15:58.472 00:15:58.472 ' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:58.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.472 --rc genhtml_branch_coverage=1 00:15:58.472 --rc genhtml_function_coverage=1 00:15:58.472 --rc genhtml_legend=1 00:15:58.472 --rc geninfo_all_blocks=1 00:15:58.472 --rc geninfo_unexecuted_blocks=1 00:15:58.472 00:15:58.472 ' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:58.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.472 --rc genhtml_branch_coverage=1 00:15:58.472 --rc genhtml_function_coverage=1 00:15:58.472 --rc genhtml_legend=1 00:15:58.472 --rc geninfo_all_blocks=1 00:15:58.472 --rc geninfo_unexecuted_blocks=1 00:15:58.472 00:15:58.472 ' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:58.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.472 --rc genhtml_branch_coverage=1 00:15:58.472 --rc genhtml_function_coverage=1 00:15:58.472 --rc genhtml_legend=1 00:15:58.472 --rc geninfo_all_blocks=1 00:15:58.472 --rc geninfo_unexecuted_blocks=1 00:15:58.472 00:15:58.472 ' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.472 10:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:58.472 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:58.473 10:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:58.473 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:06.614 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
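nvmftestinit, entered just above, goes on to scan the two e810 ports and move one of them into a private network namespace before pinging across the link; the trace of that follows below. Condensed, the bring-up amounts to the commands shown next (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken directly from the trace):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator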
00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:06.615 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:06.615 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:06.615 Found net devices under 0000:31:00.0: cvl_0_0 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:06.615 Found net devices under 0000:31:00.1: cvl_0_1 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.615 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.615 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.615 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.615 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:06.615 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:06.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:16:06.876 00:16:06.876 --- 10.0.0.2 ping statistics --- 00:16:06.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.876 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:16:06.876 00:16:06.876 --- 10.0.0.1 ping statistics --- 00:16:06.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.876 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3797589 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3797589 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3797589 ']' 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:06.876 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.877 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:06.877 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:06.877 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.877 [2024-11-06 10:08:10.339758] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:16:06.877 [2024-11-06 10:08:10.339826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.137 [2024-11-06 10:08:10.430374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.137 [2024-11-06 10:08:10.472568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.137 [2024-11-06 10:08:10.472606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.137 [2024-11-06 10:08:10.472614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.137 [2024-11-06 10:08:10.472620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.137 [2024-11-06 10:08:10.472627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.137 [2024-11-06 10:08:10.474248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.137 [2024-11-06 10:08:10.474363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.137 [2024-11-06 10:08:10.474520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.137 [2024-11-06 10:08:10.474521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:07.708 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:07.969 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:07.969 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:07.969 "nvmf_tgt_1" 00:16:07.969 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:08.230 "nvmf_tgt_2" 00:16:08.230 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
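The multitarget checks around this point create two extra targets, verify the target count with nvmf_get_targets piped to jq (continued on the next trace line), and then delete them again. Condensed, the flow driven by multitarget_rpc.py (helper path as in the trace; the bracket tests are a compact restatement of the test's '[' N '!=' N ']' checks) looks roughly like:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two above
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only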
00:16:08.230 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:08.230 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:08.230 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:08.230 true 00:16:08.230 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:08.492 true 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.492 rmmod nvme_tcp 00:16:08.492 rmmod nvme_fabrics 00:16:08.492 rmmod nvme_keyring 00:16:08.492 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3797589 ']' 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3797589 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3797589 ']' 00:16:08.752 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3797589 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3797589 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:08.752 10:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3797589' 00:16:08.752 killing process with pid 3797589 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3797589 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3797589 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.752 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:11.295 00:16:11.295 real 0m12.766s 00:16:11.295 user 0m10.075s 00:16:11.295 sys 0m6.905s 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.295 ************************************ 00:16:11.295 END TEST nvmf_multitarget 00:16:11.295 ************************************ 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.295 ************************************ 00:16:11.295 START TEST nvmf_rpc 00:16:11.295 ************************************ 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:11.295 * Looking for test storage... 
00:16:11.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.295 --rc genhtml_branch_coverage=1 00:16:11.295 --rc genhtml_function_coverage=1 00:16:11.295 --rc genhtml_legend=1 00:16:11.295 --rc geninfo_all_blocks=1 00:16:11.295 --rc geninfo_unexecuted_blocks=1 00:16:11.295 00:16:11.295 ' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.295 --rc genhtml_branch_coverage=1 00:16:11.295 --rc genhtml_function_coverage=1 00:16:11.295 --rc genhtml_legend=1 00:16:11.295 --rc geninfo_all_blocks=1 00:16:11.295 --rc geninfo_unexecuted_blocks=1 00:16:11.295 00:16:11.295 ' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.295 --rc genhtml_branch_coverage=1 00:16:11.295 --rc genhtml_function_coverage=1 00:16:11.295 --rc genhtml_legend=1 00:16:11.295 --rc geninfo_all_blocks=1 00:16:11.295 --rc geninfo_unexecuted_blocks=1 00:16:11.295 00:16:11.295 ' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.295 --rc genhtml_branch_coverage=1 00:16:11.295 --rc genhtml_function_coverage=1 00:16:11.295 --rc genhtml_legend=1 00:16:11.295 --rc geninfo_all_blocks=1 00:16:11.295 --rc geninfo_unexecuted_blocks=1 00:16:11.295 00:16:11.295 ' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
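The nvmf_rpc test sources the same common.sh as above and will repeat the nvmfappstart sequence seen at the start of the multitarget run: launch build/bin/nvmf_tgt inside the target namespace, record its pid, and wait for the RPC socket. A condensed sketch, mirroring the earlier trace (the explicit backgrounding and the commented waitforlisten call are assumptions about how common.sh wires this up, not lines from this log):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten "$nvmfpid"   # common.sh helper: waits for /var/tmp/spdk.sock to accept RPCs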
00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.295 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:11.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:11.296 10:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:11.296 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:19.431 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:19.432 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:19.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:19.432 Found net devices under 0000:31:00.0: cvl_0_0 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:19.432 Found net devices under 0000:31:00.1: cvl_0_1 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.432 10:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.432 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:16:19.692 00:16:19.692 --- 10.0.0.2 ping statistics --- 00:16:19.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.692 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:16:19.692 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:16:19.953 00:16:19.953 --- 10.0.0.1 ping statistics --- 00:16:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.953 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3802763 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3802763 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3802763 ']' 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.953 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.953 [2024-11-06 10:08:23.313774] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:16:19.953 [2024-11-06 10:08:23.313840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.953 [2024-11-06 10:08:23.410320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.953 [2024-11-06 10:08:23.451867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.953 [2024-11-06 10:08:23.451908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.953 [2024-11-06 10:08:23.451916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.953 [2024-11-06 10:08:23.451923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.953 [2024-11-06 10:08:23.451929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.953 [2024-11-06 10:08:23.453555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.953 [2024-11-06 10:08:23.453676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.213 [2024-11-06 10:08:23.453837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.213 [2024-11-06 10:08:23.453838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.782 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:20.782 "tick_rate": 2400000000, 00:16:20.782 "poll_groups": [ 00:16:20.782 { 00:16:20.782 "name": "nvmf_tgt_poll_group_000", 00:16:20.782 "admin_qpairs": 0, 00:16:20.782 "io_qpairs": 0, 00:16:20.782 "current_admin_qpairs": 0, 00:16:20.782 "current_io_qpairs": 0, 00:16:20.782 "pending_bdev_io": 0, 00:16:20.782 "completed_nvme_io": 0, 00:16:20.782 "transports": [] 00:16:20.782 }, 00:16:20.782 { 00:16:20.782 "name": "nvmf_tgt_poll_group_001", 00:16:20.782 "admin_qpairs": 0, 00:16:20.782 "io_qpairs": 0, 00:16:20.782 "current_admin_qpairs": 0, 00:16:20.782 "current_io_qpairs": 0, 00:16:20.782 "pending_bdev_io": 0, 00:16:20.782 "completed_nvme_io": 0, 00:16:20.782 "transports": [] 00:16:20.782 }, 00:16:20.782 { 00:16:20.782 "name": "nvmf_tgt_poll_group_002", 00:16:20.782 "admin_qpairs": 0, 00:16:20.782 "io_qpairs": 0, 00:16:20.782 
"current_admin_qpairs": 0, 00:16:20.782 "current_io_qpairs": 0, 00:16:20.782 "pending_bdev_io": 0, 00:16:20.782 "completed_nvme_io": 0, 00:16:20.782 "transports": [] 00:16:20.782 }, 00:16:20.782 { 00:16:20.782 "name": "nvmf_tgt_poll_group_003", 00:16:20.782 "admin_qpairs": 0, 00:16:20.782 "io_qpairs": 0, 00:16:20.782 "current_admin_qpairs": 0, 00:16:20.782 "current_io_qpairs": 0, 00:16:20.782 "pending_bdev_io": 0, 00:16:20.783 "completed_nvme_io": 0, 00:16:20.783 "transports": [] 00:16:20.783 } 00:16:20.783 ] 00:16:20.783 }' 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.783 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 [2024-11-06 10:08:24.285465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:21.048 "tick_rate": 2400000000, 00:16:21.048 "poll_groups": [ 00:16:21.048 { 00:16:21.048 "name": "nvmf_tgt_poll_group_000", 00:16:21.048 "admin_qpairs": 0, 00:16:21.048 "io_qpairs": 0, 00:16:21.048 "current_admin_qpairs": 0, 00:16:21.048 "current_io_qpairs": 0, 00:16:21.048 "pending_bdev_io": 0, 00:16:21.048 "completed_nvme_io": 0, 00:16:21.048 "transports": [ 00:16:21.048 { 00:16:21.048 "trtype": "TCP" 00:16:21.048 } 00:16:21.048 ] 00:16:21.048 }, 00:16:21.048 { 00:16:21.048 "name": "nvmf_tgt_poll_group_001", 00:16:21.048 "admin_qpairs": 0, 00:16:21.048 "io_qpairs": 0, 00:16:21.048 "current_admin_qpairs": 0, 00:16:21.048 "current_io_qpairs": 0, 00:16:21.048 "pending_bdev_io": 0, 00:16:21.048 "completed_nvme_io": 0, 00:16:21.048 "transports": [ 00:16:21.048 { 00:16:21.048 "trtype": "TCP" 00:16:21.048 } 00:16:21.048 ] 00:16:21.048 }, 00:16:21.048 { 00:16:21.048 "name": "nvmf_tgt_poll_group_002", 00:16:21.048 "admin_qpairs": 0, 00:16:21.048 "io_qpairs": 0, 00:16:21.048 "current_admin_qpairs": 0, 00:16:21.048 "current_io_qpairs": 0, 00:16:21.048 "pending_bdev_io": 0, 00:16:21.048 "completed_nvme_io": 0, 00:16:21.048 "transports": [ 00:16:21.048 { 00:16:21.048 "trtype": "TCP" 
00:16:21.048 } 00:16:21.048 ] 00:16:21.048 }, 00:16:21.048 { 00:16:21.048 "name": "nvmf_tgt_poll_group_003", 00:16:21.048 "admin_qpairs": 0, 00:16:21.048 "io_qpairs": 0, 00:16:21.048 "current_admin_qpairs": 0, 00:16:21.048 "current_io_qpairs": 0, 00:16:21.048 "pending_bdev_io": 0, 00:16:21.048 "completed_nvme_io": 0, 00:16:21.048 "transports": [ 00:16:21.048 { 00:16:21.048 "trtype": "TCP" 00:16:21.048 } 00:16:21.048 ] 00:16:21.048 } 00:16:21.048 ] 00:16:21.048 }' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 Malloc1 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.048 [2024-11-06 10:08:24.487284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.048 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:21.049 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:21.049 [2024-11-06 10:08:24.524298] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:21.310 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:21.310 could not add new controller: failed to write to nvme-fabrics device 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:21.310 10:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.310 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:22.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:22.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.235 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.236 [2024-11-06 10:08:28.280808] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:25.236 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:25.236 could not add new controller: failed to write to nvme-fabrics device 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.236 
10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.236 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.628 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.628 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:26.628 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.628 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:26.628 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:28.540 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:28.540 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.801 
10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.801 [2024-11-06 10:08:32.085947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.801 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.186 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:30.186 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:30.186 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.186 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:30.186 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.727 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 [2024-11-06 10:08:35.820560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.728 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.111 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.111 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:34.111 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.111 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:34.111 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:36.021 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.284 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.285 [2024-11-06 10:08:39.589397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.285 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.197 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.197 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:38.197 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.197 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:38.197 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:40.107 
10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 [2024-11-06 10:08:43.394067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.107 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.489 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.489 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:41.489 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.489 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:41.489 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:44.030 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.030 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.031 [2024-11-06 10:08:47.122794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.031 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.411 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.411 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:45.411 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.411 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:45.411 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.325 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:47.585 
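The cycle traced above (target/rpc.sh markers @81 through @94) builds a subsystem over JSON-RPC, attaches a listener and a namespace, connects from the host with nvme-cli, waits for the SPDKISFASTANDAWESOME serial to surface in lsblk, then tears it all down again. A minimal standalone sketch of one iteration, with the NQN, serial, address and rpc.py path taken from this log; the retry loop is a simplified stand-in for the waitforserial helper, which caps out after 16 attempts:

    #!/usr/bin/env bash
    # One create/connect/teardown cycle, mirroring the rpc.sh trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME

    "$rpc" nvmf_create_subsystem "$nqn" -s "$serial"                      # rpc.sh@82
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                      # rpc.sh@84
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"                           # rpc.sh@85

    # Connect from the initiator and poll until the namespace's serial shows up.
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420                     # --hostnqn/--hostid omitted in this sketch
    until lsblk -l -o NAME,SERIAL | grep -qw "$serial"; do sleep 2; done  # simplified waitforserial

    nvme disconnect -n "$nqn"                                             # rpc.sh@90
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 5                              # rpc.sh@93
    "$rpc" nvmf_delete_subsystem "$nqn"                                   # rpc.sh@94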
10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 [2024-11-06 10:08:50.888449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.585 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 [2024-11-06 10:08:50.956623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 
10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 [2024-11-06 10:08:51.024805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.586 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 [2024-11-06 10:08:51.088991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 [2024-11-06 10:08:51.157220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.847 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:47.847 "tick_rate": 2400000000, 00:16:47.847 "poll_groups": [ 00:16:47.847 { 00:16:47.847 "name": "nvmf_tgt_poll_group_000", 00:16:47.847 "admin_qpairs": 0, 00:16:47.847 "io_qpairs": 224, 00:16:47.847 "current_admin_qpairs": 0, 00:16:47.847 "current_io_qpairs": 0, 00:16:47.847 "pending_bdev_io": 0, 00:16:47.847 "completed_nvme_io": 500, 00:16:47.847 "transports": [ 00:16:47.847 { 00:16:47.847 "trtype": "TCP" 00:16:47.847 } 00:16:47.847 ] 00:16:47.847 }, 00:16:47.847 { 00:16:47.847 "name": "nvmf_tgt_poll_group_001", 00:16:47.847 "admin_qpairs": 1, 00:16:47.847 "io_qpairs": 223, 00:16:47.847 "current_admin_qpairs": 0, 00:16:47.847 "current_io_qpairs": 0, 00:16:47.847 "pending_bdev_io": 0, 00:16:47.847 "completed_nvme_io": 275, 00:16:47.847 "transports": [ 00:16:47.847 { 00:16:47.847 "trtype": "TCP" 00:16:47.847 } 00:16:47.848 ] 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "name": "nvmf_tgt_poll_group_002", 00:16:47.848 "admin_qpairs": 6, 00:16:47.848 "io_qpairs": 218, 00:16:47.848 "current_admin_qpairs": 0, 00:16:47.848 "current_io_qpairs": 0, 00:16:47.848 "pending_bdev_io": 0, 00:16:47.848 "completed_nvme_io": 222, 00:16:47.848 "transports": [ 00:16:47.848 { 00:16:47.848 "trtype": "TCP" 00:16:47.848 } 00:16:47.848 ] 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "name": "nvmf_tgt_poll_group_003", 00:16:47.848 "admin_qpairs": 0, 00:16:47.848 "io_qpairs": 224, 00:16:47.848 "current_admin_qpairs": 0, 00:16:47.848 "current_io_qpairs": 0, 00:16:47.848 "pending_bdev_io": 0, 00:16:47.848 "completed_nvme_io": 242, 00:16:47.848 "transports": [ 00:16:47.848 { 00:16:47.848 "trtype": "TCP" 00:16:47.848 } 00:16:47.848 ] 00:16:47.848 } 00:16:47.848 ] 00:16:47.848 }' 00:16:47.848 10:08:51 
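Immediately after the stats dump above, the trace runs the jsum helper (rpc.sh@112/@113): a jq filter pulls one counter per poll group out of the saved JSON and awk sums the column. A small sketch of that reduction, assuming the blob has been captured into a shell variable the way the trace does:

    # jsum-style reduction over the nvmf_get_stats JSON shown above.
    stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats)

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0 = 7 in this run
    jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889 in this run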
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.848 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.848 rmmod nvme_tcp 00:16:47.848 rmmod nvme_fabrics 00:16:48.108 rmmod nvme_keyring 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3802763 ']' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3802763 ']' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3802763' 00:16:48.108 killing process with pid 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3802763 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.108 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.785 00:16:50.785 real 0m39.312s 00:16:50.785 user 1m54.717s 00:16:50.785 sys 0m8.783s 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.785 ************************************ 00:16:50.785 END TEST nvmf_rpc 00:16:50.785 ************************************ 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.785 ************************************ 00:16:50.785 START TEST nvmf_invalid 00:16:50.785 ************************************ 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:50.785 * Looking for test storage... 
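The nvmftestfini sequence above unwinds the whole setup: unload the NVMe/TCP and fabrics modules (the rmmod lines), kill the nvmf_tgt process, strip the iptables rules that were tagged for the test, and drop the target's network namespace before flushing the initiator interface. A condensed sketch of that teardown with the pid and interface names from this run; the netns deletion is an assumption about what _remove_spdk_ns amounts to here:

    # Teardown mirrored from the nvmftestfini trace above.
    sync
    modprobe -v -r nvme-tcp       # removes nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics

    kill 3802763                  # nvmf_tgt pid from this run; the script tracks it in $nvmfpid

    # Remove only the rules the test added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk   # assumption: the effect of _remove_spdk_ns in this run
    ip -4 addr flush cvl_0_1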
00:16:50.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:50.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.785 --rc genhtml_branch_coverage=1 00:16:50.785 --rc genhtml_function_coverage=1 00:16:50.785 --rc genhtml_legend=1 00:16:50.785 --rc geninfo_all_blocks=1 00:16:50.785 --rc geninfo_unexecuted_blocks=1 00:16:50.785 00:16:50.785 ' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:50.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.785 --rc genhtml_branch_coverage=1 00:16:50.785 --rc genhtml_function_coverage=1 00:16:50.785 --rc genhtml_legend=1 00:16:50.785 --rc geninfo_all_blocks=1 00:16:50.785 --rc geninfo_unexecuted_blocks=1 00:16:50.785 00:16:50.785 ' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:50.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.785 --rc genhtml_branch_coverage=1 00:16:50.785 --rc genhtml_function_coverage=1 00:16:50.785 --rc genhtml_legend=1 00:16:50.785 --rc geninfo_all_blocks=1 00:16:50.785 --rc geninfo_unexecuted_blocks=1 00:16:50.785 00:16:50.785 ' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:50.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.785 --rc genhtml_branch_coverage=1 00:16:50.785 --rc genhtml_function_coverage=1 00:16:50.785 --rc genhtml_legend=1 00:16:50.785 --rc geninfo_all_blocks=1 00:16:50.785 --rc geninfo_unexecuted_blocks=1 00:16:50.785 00:16:50.785 ' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:50.785 10:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.785 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.786 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:58.925 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:58.925 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:58.925 Found net devices under 0000:31:00.0: cvl_0_0 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:58.925 Found net devices under 0000:31:00.1: cvl_0_1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:58.925 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:59.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:16:59.186 00:16:59.186 --- 10.0.0.2 ping statistics --- 00:16:59.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.186 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:16:59.186 00:16:59.186 --- 10.0.0.1 ping statistics --- 00:16:59.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.186 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3813307 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3813307 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3813307 ']' 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:59.186 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.186 [2024-11-06 10:09:02.618128] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
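For readability, the interface and namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) condenses to roughly the following shell sketch. It is reconstructed from the trace, not copied from the script: the cvl_0_0/cvl_0_1 names, the 10.0.0.0/24 addresses and port 4420 come from the log, error handling and variable plumbing are omitted, and the nvmf_tgt path is shortened. The target start-up output continues below.

  # Target-side port goes into its own network namespace; the initiator stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator gets 10.0.0.1, target gets 10.0.0.2, both /24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listener port; the comment tags the rule so cleanup can find it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify connectivity in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmfappstart then launches nvmf_tgt inside the namespace (path shortened here).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &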
00:16:59.186 [2024-11-06 10:09:02.618196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.447 [2024-11-06 10:09:02.708401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.447 [2024-11-06 10:09:02.749688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.447 [2024-11-06 10:09:02.749724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.447 [2024-11-06 10:09:02.749732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.447 [2024-11-06 10:09:02.749739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.447 [2024-11-06 10:09:02.749745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.447 [2024-11-06 10:09:02.751614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.447 [2024-11-06 10:09:02.751734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.447 [2024-11-06 10:09:02.751761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.447 [2024-11-06 10:09:02.751764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:00.017 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17671 00:17:00.278 [2024-11-06 10:09:03.622915] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:00.278 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:00.278 { 00:17:00.278 "nqn": "nqn.2016-06.io.spdk:cnode17671", 00:17:00.278 "tgt_name": "foobar", 00:17:00.278 "method": "nvmf_create_subsystem", 00:17:00.278 "req_id": 1 00:17:00.278 } 00:17:00.278 Got JSON-RPC error response 00:17:00.278 response: 00:17:00.278 { 00:17:00.278 "code": -32603, 00:17:00.278 "message": "Unable to find target foobar" 00:17:00.278 }' 00:17:00.278 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:00.278 { 00:17:00.278 "nqn": "nqn.2016-06.io.spdk:cnode17671", 00:17:00.278 "tgt_name": "foobar", 00:17:00.278 "method": "nvmf_create_subsystem", 00:17:00.278 "req_id": 1 00:17:00.278 } 00:17:00.278 Got JSON-RPC error response 00:17:00.278 
response: 00:17:00.278 { 00:17:00.278 "code": -32603, 00:17:00.278 "message": "Unable to find target foobar" 00:17:00.278 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:00.278 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:00.278 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24140 00:17:00.539 [2024-11-06 10:09:03.815574] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24140: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:00.539 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:00.539 { 00:17:00.539 "nqn": "nqn.2016-06.io.spdk:cnode24140", 00:17:00.539 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:00.539 "method": "nvmf_create_subsystem", 00:17:00.539 "req_id": 1 00:17:00.539 } 00:17:00.539 Got JSON-RPC error response 00:17:00.539 response: 00:17:00.539 { 00:17:00.539 "code": -32602, 00:17:00.539 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:00.539 }' 00:17:00.539 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:00.539 { 00:17:00.539 "nqn": "nqn.2016-06.io.spdk:cnode24140", 00:17:00.539 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:00.539 "method": "nvmf_create_subsystem", 00:17:00.539 "req_id": 1 00:17:00.539 } 00:17:00.539 Got JSON-RPC error response 00:17:00.539 response: 00:17:00.539 { 00:17:00.539 "code": -32602, 00:17:00.539 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:00.539 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:00.539 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:00.539 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12211 00:17:00.539 [2024-11-06 10:09:04.008178] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12211: invalid model number 'SPDK_Controller' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:00.799 { 00:17:00.799 "nqn": "nqn.2016-06.io.spdk:cnode12211", 00:17:00.799 "model_number": "SPDK_Controller\u001f", 00:17:00.799 "method": "nvmf_create_subsystem", 00:17:00.799 "req_id": 1 00:17:00.799 } 00:17:00.799 Got JSON-RPC error response 00:17:00.799 response: 00:17:00.799 { 00:17:00.799 "code": -32602, 00:17:00.799 "message": "Invalid MN SPDK_Controller\u001f" 00:17:00.799 }' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:00.799 { 00:17:00.799 "nqn": "nqn.2016-06.io.spdk:cnode12211", 00:17:00.799 "model_number": "SPDK_Controller\u001f", 00:17:00.799 "method": "nvmf_create_subsystem", 00:17:00.799 "req_id": 1 00:17:00.799 } 00:17:00.799 Got JSON-RPC error response 00:17:00.799 response: 00:17:00.799 { 00:17:00.799 "code": -32602, 00:17:00.799 "message": "Invalid MN SPDK_Controller\u001f" 00:17:00.799 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:00.799 10:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:00.799 
10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:00.799 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hC9~f+/m<'\''Sz!I C!Y|i' 00:17:00.800 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hC9~f+/m<'\''Sz!I C!Y|i' nqn.2016-06.io.spdk:cnode10902 00:17:01.060 [2024-11-06 10:09:04.361320] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10902: invalid serial number 'hC9~f+/m<'Sz!I C!Y|i' 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:01.060 { 00:17:01.060 "nqn": "nqn.2016-06.io.spdk:cnode10902", 00:17:01.060 "serial_number": "hC9~f\u007f+/m<'\''Sz!I C!Y|i", 00:17:01.060 "method": "nvmf_create_subsystem", 00:17:01.060 "req_id": 1 00:17:01.060 } 00:17:01.060 Got JSON-RPC error response 00:17:01.060 response: 00:17:01.060 { 00:17:01.060 "code": -32602, 00:17:01.060 "message": "Invalid SN hC9~f\u007f+/m<'\''Sz!I C!Y|i" 00:17:01.060 }' 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:01.060 { 00:17:01.060 "nqn": "nqn.2016-06.io.spdk:cnode10902", 00:17:01.060 "serial_number": "hC9~f\u007f+/m<'Sz!I C!Y|i", 00:17:01.060 "method": "nvmf_create_subsystem", 00:17:01.060 "req_id": 1 00:17:01.060 } 00:17:01.060 Got JSON-RPC error response 00:17:01.060 response: 00:17:01.060 { 00:17:01.060 "code": -32602, 00:17:01.060 "message": "Invalid SN hC9~f\u007f+/m<'Sz!I C!Y|i" 00:17:01.060 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@58 -- # gen_random_s 41 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:01.060 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
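The long per-character trace above, and the one continuing below for the 41-character model-number case, is xtrace output from invalid.sh's gen_random_s helper: it walks a chars array of ASCII codes 32-127, renders each code with printf %x and echo -e, appends it to string, and the result is then passed to nvmf_create_subsystem as a deliberately bad serial or model number. A condensed sketch of the equivalent logic follows; it is reconstructed from the trace, the index selection and the leading-dash handling are assumptions, and the rpc.py path is shortened.

  # Sketch only: build a random string of the requested length from ASCII 32-127.
  gen_random_s() {
      local length=$1 ll idx code string=
      local chars=( $(seq 32 127) )          # same code-point set as the traced chars array
      for (( ll = 0; ll < length; ll++ )); do
          idx=$(( RANDOM % ${#chars[@]} ))   # assumption: the real helper's picker is not visible in the trace
          code=${chars[idx]}
          string+=$(echo -e "\\x$(printf %x "$code")")
      done
      [[ ${string:0:1} == - ]] && string=" ${string:1}"   # assumption: avoid an option-like leading '-'
      echo "$string"
  }

  # The 21-character string above was fed to the RPC as an invalid serial number,
  # and the test only asserts on the JSON-RPC error text ("Invalid SN" / "Invalid MN"):
  serial=$(gen_random_s 21)
  out=$(scripts/rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode10902 2>&1 || true)
  [[ $out == *"Invalid SN"* ]]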
00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x22' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
94 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.061 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:01.322 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6b' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 107 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:17:01.323 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'N"`!6oK7;A"sYF+GG^Wa /dev/null' 00:17:03.406 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.317 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:05.577 00:17:05.577 real 0m15.074s 00:17:05.577 user 0m20.799s 00:17:05.577 sys 0m7.382s 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.577 ************************************ 00:17:05.577 END TEST nvmf_invalid 00:17:05.577 ************************************ 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.577 ************************************ 00:17:05.577 START TEST nvmf_connect_stress 00:17:05.577 
************************************ 00:17:05.577 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:05.577 * Looking for test storage... 00:17:05.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.577 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:05.577 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:05.577 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.838 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.839 --rc genhtml_branch_coverage=1 00:17:05.839 --rc genhtml_function_coverage=1 00:17:05.839 --rc genhtml_legend=1 00:17:05.839 --rc geninfo_all_blocks=1 00:17:05.839 --rc geninfo_unexecuted_blocks=1 00:17:05.839 00:17:05.839 ' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.839 --rc genhtml_branch_coverage=1 00:17:05.839 --rc genhtml_function_coverage=1 00:17:05.839 --rc genhtml_legend=1 00:17:05.839 --rc geninfo_all_blocks=1 00:17:05.839 --rc geninfo_unexecuted_blocks=1 00:17:05.839 00:17:05.839 ' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.839 --rc genhtml_branch_coverage=1 00:17:05.839 --rc genhtml_function_coverage=1 00:17:05.839 --rc genhtml_legend=1 00:17:05.839 --rc geninfo_all_blocks=1 00:17:05.839 --rc geninfo_unexecuted_blocks=1 00:17:05.839 00:17:05.839 ' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.839 --rc genhtml_branch_coverage=1 00:17:05.839 --rc genhtml_function_coverage=1 00:17:05.839 --rc genhtml_legend=1 00:17:05.839 --rc geninfo_all_blocks=1 00:17:05.839 --rc geninfo_unexecuted_blocks=1 00:17:05.839 00:17:05.839 ' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:05.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:05.839 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:13.975 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:13.976 10:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:13.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:13.976 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:13.976 Found net devices under 0000:31:00.0: cvl_0_0 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:13.976 Found net devices under 0000:31:00.1: cvl_0_1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:13.976 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:17:14.238 00:17:14.238 --- 10.0.0.2 ping statistics --- 00:17:14.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.238 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:17:14.238 00:17:14.238 --- 10.0.0.1 ping statistics --- 00:17:14.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.238 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3819480 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3819480 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3819480 ']' 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:14.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.238 10:09:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.238 [2024-11-06 10:09:17.711215] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:17:14.238 [2024-11-06 10:09:17.711281] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.500 [2024-11-06 10:09:17.826737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.500 [2024-11-06 10:09:17.878113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.500 [2024-11-06 10:09:17.878167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.500 [2024-11-06 10:09:17.878177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.500 [2024-11-06 10:09:17.878186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.500 [2024-11-06 10:09:17.878193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.500 [2024-11-06 10:09:17.880107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.500 [2024-11-06 10:09:17.880361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.500 [2024-11-06 10:09:17.880365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.070 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.070 [2024-11-06 10:09:18.568179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
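The rpc_cmd invocations traced here (the transport and subsystem calls just above, and the listener and null-bdev calls that follow) typically dispatch to SPDK's scripts/rpc.py JSON-RPC client. A minimal sketch of replaying the same bring-up by hand, assuming an nvmf_tgt already running on the default /var/tmp/spdk.sock (the socket path and the RPC variable are assumptions; the method names and flags are taken from this log):

  RPC="./scripts/rpc.py"                       # assumed path to the SPDK RPC client

  # TCP transport with the harness options seen in the log (-o -u 8192)
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces (-m 10)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # TCP listener on the target-side address the harness configured (10.0.0.2:4420)
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Null backing bdev used for the stress run: 1000 MiB, 512-byte blocks, named NULL1
  $RPC bdev_null_create NULL1 1000 512

In this run the target itself sits in the cvl_0_0_ns_spdk network namespace, but the RPC socket is filesystem-based, so the same commands work from the default namespace.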
00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.331 [2024-11-06 10:09:18.592617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.331 NULL1 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3819640 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.331 10:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.331 10:09:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.591 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.591 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:15.591 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.591 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.591 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.852 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.852 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:15.852 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.852 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.111 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.370 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.370 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:16.370 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.370 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.370 10:09:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.629 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.629 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:16.629 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.629 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.629 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.889 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.889 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:16.889 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.889 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.889 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.459 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.459 10:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:17.459 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.459 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.459 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.719 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.719 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:17.719 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.719 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.719 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.980 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.980 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:17.980 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.980 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.980 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.240 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.240 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:18.240 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.240 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.240 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.500 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.500 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:18.500 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.500 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.500 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.071 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.071 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:19.071 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.071 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.071 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.331 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.331 10:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:19.331 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.331 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.331 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.594 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:19.594 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.594 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.594 10:09:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.855 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.855 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:19.855 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.855 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.855 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.122 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.122 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:20.122 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.122 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.122 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.692 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.692 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:20.692 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.692 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.692 10:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.952 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.952 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:20.952 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.952 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.952 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.213 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.213 10:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:21.213 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.213 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.213 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.475 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.475 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:21.475 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.475 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.475 10:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.736 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.736 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:21.736 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.736 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.736 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.308 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.308 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:22.308 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.308 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.308 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.568 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.568 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:22.568 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.568 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.568 10:09:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.829 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.829 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:22.829 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.829 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.829 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.090 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.090 10:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:23.090 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.090 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.090 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.350 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.350 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:23.350 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.350 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.350 10:09:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.921 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.921 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:23.921 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.921 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.921 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.182 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.182 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:24.182 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.182 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.182 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.442 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.442 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:24.442 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.442 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.442 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.704 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:24.704 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.704 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.704 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.964 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.964 10:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:24.964 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.964 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.964 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.534 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3819640 00:17:25.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3819640) - No such process 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3819640 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.534 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.534 rmmod nvme_tcp 00:17:25.535 rmmod nvme_fabrics 00:17:25.535 rmmod nvme_keyring 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3819480 ']' 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3819480 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3819480 ']' 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3819480 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3819480 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
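The long run of connect_stress.sh@34/@35 records above is a liveness-polling loop: while the background connect_stress process (PID 3819640 in this run) stays alive, the harness keeps replaying a prepared batch of RPCs against the target, and the loop ends once kill -0 reports "No such process". A minimal sketch of that idiom, with a hypothetical worker command and a single representative RPC standing in for the rpc.txt batch:

  # kill -0 delivers no signal; it only tests whether the PID still exists.
  some_stress_worker &                          # hypothetical stand-in for connect_stress
  worker_pid=$!

  while kill -0 "$worker_pid" 2>/dev/null; do
      # keep the target busy while the worker runs (stand-in for the rpc.txt replay)
      ./scripts/rpc.py nvmf_get_subsystems > /dev/null
      sleep 1
  done

  wait "$worker_pid"                            # reap the worker, as the harness does with 'wait 3819640'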
00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3819480' 00:17:25.535 killing process with pid 3819480 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3819480 00:17:25.535 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3819480 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.795 10:09:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.705 00:17:27.705 real 0m22.224s 00:17:27.705 user 0m42.401s 00:17:27.705 sys 0m9.936s 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 ************************************ 00:17:27.705 END TEST nvmf_connect_stress 00:17:27.705 ************************************ 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:27.705 10:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.967 ************************************ 00:17:27.967 START TEST nvmf_fused_ordering 00:17:27.967 ************************************ 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.967 * Looking for test storage... 
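The fused_ordering run that starts here re-sources scripts/common.sh, and the cmp_versions/decimal trace that follows (the same sequence seen at the top of this excerpt for connect_stress) is a version gate on the installed lcov: its version string is split on '.', '-' and ':' and compared field by field against 1.15 before the LCOV_OPTS/LCOV values are exported. A simplified sketch of that comparison idiom, not the exact scripts/common.sh implementation (version_lt and the non-numeric-field handling are assumptions):

  # Simplified sketch of the field-by-field comparison being traced; illustrative only.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          [[ $x =~ ^[0-9]+$ ]] || x=0            # non-numeric fields count as 0 in this sketch
          [[ $y =~ ^[0-9]+$ ]] || y=0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                                   # equal versions are not "less than"
  }

  # Mirror the check traced in the log: is 1.15 older than the installed lcov?
  installed=$(lcov --version | awk '{print $NF}')
  version_lt 1.15 "$installed" && echo "installed lcov is newer than 1.15"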
00:17:27.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:27.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.967 --rc genhtml_branch_coverage=1 00:17:27.967 --rc genhtml_function_coverage=1 00:17:27.967 --rc genhtml_legend=1 00:17:27.967 --rc geninfo_all_blocks=1 00:17:27.967 --rc geninfo_unexecuted_blocks=1 00:17:27.967 00:17:27.967 ' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:27.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.967 --rc genhtml_branch_coverage=1 00:17:27.967 --rc genhtml_function_coverage=1 00:17:27.967 --rc genhtml_legend=1 00:17:27.967 --rc geninfo_all_blocks=1 00:17:27.967 --rc geninfo_unexecuted_blocks=1 00:17:27.967 00:17:27.967 ' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:27.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.967 --rc genhtml_branch_coverage=1 00:17:27.967 --rc genhtml_function_coverage=1 00:17:27.967 --rc genhtml_legend=1 00:17:27.967 --rc geninfo_all_blocks=1 00:17:27.967 --rc geninfo_unexecuted_blocks=1 00:17:27.967 00:17:27.967 ' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:27.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.967 --rc genhtml_branch_coverage=1 00:17:27.967 --rc genhtml_function_coverage=1 00:17:27.967 --rc genhtml_legend=1 00:17:27.967 --rc geninfo_all_blocks=1 00:17:27.967 --rc geninfo_unexecuted_blocks=1 00:17:27.967 00:17:27.967 ' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.967 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:27.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.968 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.110 10:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:36.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:36.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:36.110 Found net devices under 0000:31:00.0: cvl_0_0 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.110 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:36.111 Found net devices under 0000:31:00.1: cvl_0_1 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.111 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:17:36.370 00:17:36.370 --- 10.0.0.2 ping statistics --- 00:17:36.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.370 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:17:36.370 00:17:36.370 --- 10.0.0.1 ping statistics --- 00:17:36.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.370 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:36.370 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3826618 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3826618 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3826618 ']' 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:36.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:36.631 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.631 [2024-11-06 10:09:39.985261] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:17:36.631 [2024-11-06 10:09:39.985327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.631 [2024-11-06 10:09:40.097231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.891 [2024-11-06 10:09:40.153632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.891 [2024-11-06 10:09:40.153693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.891 [2024-11-06 10:09:40.153703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.891 [2024-11-06 10:09:40.153710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.891 [2024-11-06 10:09:40.153716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.891 [2024-11-06 10:09:40.154513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 [2024-11-06 10:09:40.854148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 [2024-11-06 10:09:40.878443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 NULL1 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.461 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.462 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:37.462 [2024-11-06 10:09:40.950141] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
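The trace above (target/fused_ordering.sh via rpc_cmd) provisions the target entirely over RPC before launching the initiator: nvmf_tgt starts on core mask 0x2 inside the cvl_0_0_ns_spdk namespace, a TCP transport is created with an 8192-byte I/O unit size, subsystem nqn.2016-06.io.spdk:cnode1 gets a TCP listener on 10.0.0.2:4420, and a 1000 MB null bdev (NULL1, 512-byte blocks) is attached as its namespace. The lines below are a minimal sketch, not part of the log, showing the same sequence driven by hand with SPDK's standalone scripts/rpc.py client instead of the suite's rpc_cmd wrapper; binary paths, flags, and the transport ID string are copied from the trace, the rest is assumption.

    # sketch: replay the provisioning from target/fused_ordering.sh against a hand-started target
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # wait until /var/tmp/spdk.sock accepts RPCs (waitforlisten in the trace), then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # initiator side, as launched by target/fused_ordering.sh@22:
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'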
00:17:37.462 [2024-11-06 10:09:40.950210] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826720 ] 00:17:38.031 Attached to nqn.2016-06.io.spdk:cnode1 00:17:38.031 Namespace ID: 1 size: 1GB 00:17:38.031 fused_ordering(0) 00:17:38.031 fused_ordering(1) 00:17:38.031 fused_ordering(2) 00:17:38.031 fused_ordering(3) 00:17:38.031 fused_ordering(4) 00:17:38.031 fused_ordering(5) 00:17:38.031 fused_ordering(6) 00:17:38.031 fused_ordering(7) 00:17:38.031 fused_ordering(8) 00:17:38.031 fused_ordering(9) 00:17:38.031 fused_ordering(10) 00:17:38.031 fused_ordering(11) 00:17:38.031 fused_ordering(12) 00:17:38.031 fused_ordering(13) 00:17:38.031 fused_ordering(14) 00:17:38.031 fused_ordering(15) 00:17:38.031 fused_ordering(16) 00:17:38.031 fused_ordering(17) 00:17:38.031 fused_ordering(18) 00:17:38.031 fused_ordering(19) 00:17:38.031 fused_ordering(20) 00:17:38.031 fused_ordering(21) 00:17:38.031 fused_ordering(22) 00:17:38.031 fused_ordering(23) 00:17:38.031 fused_ordering(24) 00:17:38.031 fused_ordering(25) 00:17:38.031 fused_ordering(26) 00:17:38.031 fused_ordering(27) 00:17:38.031 fused_ordering(28) 00:17:38.031 fused_ordering(29) 00:17:38.031 fused_ordering(30) 00:17:38.031 fused_ordering(31) 00:17:38.031 fused_ordering(32) 00:17:38.031 fused_ordering(33) 00:17:38.031 fused_ordering(34) 00:17:38.031 fused_ordering(35) 00:17:38.031 fused_ordering(36) 00:17:38.031 fused_ordering(37) 00:17:38.031 fused_ordering(38) 00:17:38.031 fused_ordering(39) 00:17:38.031 fused_ordering(40) 00:17:38.031 fused_ordering(41) 00:17:38.031 fused_ordering(42) 00:17:38.031 fused_ordering(43) 00:17:38.031 fused_ordering(44) 00:17:38.031 fused_ordering(45) 00:17:38.031 fused_ordering(46) 00:17:38.031 fused_ordering(47) 00:17:38.031 fused_ordering(48) 00:17:38.031 fused_ordering(49) 00:17:38.031 fused_ordering(50) 00:17:38.031 fused_ordering(51) 00:17:38.031 fused_ordering(52) 00:17:38.031 fused_ordering(53) 00:17:38.031 fused_ordering(54) 00:17:38.031 fused_ordering(55) 00:17:38.031 fused_ordering(56) 00:17:38.031 fused_ordering(57) 00:17:38.031 fused_ordering(58) 00:17:38.031 fused_ordering(59) 00:17:38.031 fused_ordering(60) 00:17:38.031 fused_ordering(61) 00:17:38.031 fused_ordering(62) 00:17:38.031 fused_ordering(63) 00:17:38.031 fused_ordering(64) 00:17:38.031 fused_ordering(65) 00:17:38.031 fused_ordering(66) 00:17:38.031 fused_ordering(67) 00:17:38.031 fused_ordering(68) 00:17:38.031 fused_ordering(69) 00:17:38.031 fused_ordering(70) 00:17:38.031 fused_ordering(71) 00:17:38.031 fused_ordering(72) 00:17:38.031 fused_ordering(73) 00:17:38.031 fused_ordering(74) 00:17:38.031 fused_ordering(75) 00:17:38.031 fused_ordering(76) 00:17:38.031 fused_ordering(77) 00:17:38.031 fused_ordering(78) 00:17:38.031 fused_ordering(79) 00:17:38.031 fused_ordering(80) 00:17:38.031 fused_ordering(81) 00:17:38.031 fused_ordering(82) 00:17:38.031 fused_ordering(83) 00:17:38.031 fused_ordering(84) 00:17:38.031 fused_ordering(85) 00:17:38.031 fused_ordering(86) 00:17:38.031 fused_ordering(87) 00:17:38.031 fused_ordering(88) 00:17:38.031 fused_ordering(89) 00:17:38.031 fused_ordering(90) 00:17:38.031 fused_ordering(91) 00:17:38.031 fused_ordering(92) 00:17:38.031 fused_ordering(93) 00:17:38.031 fused_ordering(94) 00:17:38.031 fused_ordering(95) 00:17:38.031 fused_ordering(96) 00:17:38.031 fused_ordering(97) 00:17:38.031 fused_ordering(98) 
00:17:38.031 fused_ordering(99) 00:17:38.031 fused_ordering(100) 00:17:38.031 fused_ordering(101) 00:17:38.031 fused_ordering(102) 00:17:38.031 fused_ordering(103) 00:17:38.031 fused_ordering(104) 00:17:38.031 fused_ordering(105) 00:17:38.031 fused_ordering(106) 00:17:38.031 fused_ordering(107) 00:17:38.031 fused_ordering(108) 00:17:38.031 fused_ordering(109) 00:17:38.031 fused_ordering(110) 00:17:38.031 fused_ordering(111) 00:17:38.031 fused_ordering(112) 00:17:38.031 fused_ordering(113) 00:17:38.031 fused_ordering(114) 00:17:38.031 fused_ordering(115) 00:17:38.031 fused_ordering(116) 00:17:38.031 fused_ordering(117) 00:17:38.031 fused_ordering(118) 00:17:38.031 fused_ordering(119) 00:17:38.031 fused_ordering(120) 00:17:38.031 fused_ordering(121) 00:17:38.031 fused_ordering(122) 00:17:38.031 fused_ordering(123) 00:17:38.031 fused_ordering(124) 00:17:38.031 fused_ordering(125) 00:17:38.031 fused_ordering(126) 00:17:38.031 fused_ordering(127) 00:17:38.031 fused_ordering(128) 00:17:38.031 fused_ordering(129) 00:17:38.031 fused_ordering(130) 00:17:38.032 fused_ordering(131) 00:17:38.032 fused_ordering(132) 00:17:38.032 fused_ordering(133) 00:17:38.032 fused_ordering(134) 00:17:38.032 fused_ordering(135) 00:17:38.032 fused_ordering(136) 00:17:38.032 fused_ordering(137) 00:17:38.032 fused_ordering(138) 00:17:38.032 fused_ordering(139) 00:17:38.032 fused_ordering(140) 00:17:38.032 fused_ordering(141) 00:17:38.032 fused_ordering(142) 00:17:38.032 fused_ordering(143) 00:17:38.032 fused_ordering(144) 00:17:38.032 fused_ordering(145) 00:17:38.032 fused_ordering(146) 00:17:38.032 fused_ordering(147) 00:17:38.032 fused_ordering(148) 00:17:38.032 fused_ordering(149) 00:17:38.032 fused_ordering(150) 00:17:38.032 fused_ordering(151) 00:17:38.032 fused_ordering(152) 00:17:38.032 fused_ordering(153) 00:17:38.032 fused_ordering(154) 00:17:38.032 fused_ordering(155) 00:17:38.032 fused_ordering(156) 00:17:38.032 fused_ordering(157) 00:17:38.032 fused_ordering(158) 00:17:38.032 fused_ordering(159) 00:17:38.032 fused_ordering(160) 00:17:38.032 fused_ordering(161) 00:17:38.032 fused_ordering(162) 00:17:38.032 fused_ordering(163) 00:17:38.032 fused_ordering(164) 00:17:38.032 fused_ordering(165) 00:17:38.032 fused_ordering(166) 00:17:38.032 fused_ordering(167) 00:17:38.032 fused_ordering(168) 00:17:38.032 fused_ordering(169) 00:17:38.032 fused_ordering(170) 00:17:38.032 fused_ordering(171) 00:17:38.032 fused_ordering(172) 00:17:38.032 fused_ordering(173) 00:17:38.032 fused_ordering(174) 00:17:38.032 fused_ordering(175) 00:17:38.032 fused_ordering(176) 00:17:38.032 fused_ordering(177) 00:17:38.032 fused_ordering(178) 00:17:38.032 fused_ordering(179) 00:17:38.032 fused_ordering(180) 00:17:38.032 fused_ordering(181) 00:17:38.032 fused_ordering(182) 00:17:38.032 fused_ordering(183) 00:17:38.032 fused_ordering(184) 00:17:38.032 fused_ordering(185) 00:17:38.032 fused_ordering(186) 00:17:38.032 fused_ordering(187) 00:17:38.032 fused_ordering(188) 00:17:38.032 fused_ordering(189) 00:17:38.032 fused_ordering(190) 00:17:38.032 fused_ordering(191) 00:17:38.032 fused_ordering(192) 00:17:38.032 fused_ordering(193) 00:17:38.032 fused_ordering(194) 00:17:38.032 fused_ordering(195) 00:17:38.032 fused_ordering(196) 00:17:38.032 fused_ordering(197) 00:17:38.032 fused_ordering(198) 00:17:38.032 fused_ordering(199) 00:17:38.032 fused_ordering(200) 00:17:38.032 fused_ordering(201) 00:17:38.032 fused_ordering(202) 00:17:38.032 fused_ordering(203) 00:17:38.032 fused_ordering(204) 00:17:38.032 fused_ordering(205) 00:17:38.292 
fused_ordering(206) 00:17:38.292 fused_ordering(207) 00:17:38.292 fused_ordering(208) 00:17:38.292 fused_ordering(209) 00:17:38.292 fused_ordering(210) 00:17:38.292 fused_ordering(211) 00:17:38.292 fused_ordering(212) 00:17:38.292 fused_ordering(213) 00:17:38.292 fused_ordering(214) 00:17:38.292 fused_ordering(215) 00:17:38.292 fused_ordering(216) 00:17:38.292 fused_ordering(217) 00:17:38.292 fused_ordering(218) 00:17:38.292 fused_ordering(219) 00:17:38.292 fused_ordering(220) 00:17:38.292 fused_ordering(221) 00:17:38.292 fused_ordering(222) 00:17:38.292 fused_ordering(223) 00:17:38.292 fused_ordering(224) 00:17:38.292 fused_ordering(225) 00:17:38.292 fused_ordering(226) 00:17:38.292 fused_ordering(227) 00:17:38.292 fused_ordering(228) 00:17:38.292 fused_ordering(229) 00:17:38.292 fused_ordering(230) 00:17:38.292 fused_ordering(231) 00:17:38.292 fused_ordering(232) 00:17:38.292 fused_ordering(233) 00:17:38.292 fused_ordering(234) 00:17:38.292 fused_ordering(235) 00:17:38.292 fused_ordering(236) 00:17:38.292 fused_ordering(237) 00:17:38.292 fused_ordering(238) 00:17:38.292 fused_ordering(239) 00:17:38.292 fused_ordering(240) 00:17:38.292 fused_ordering(241) 00:17:38.292 fused_ordering(242) 00:17:38.292 fused_ordering(243) 00:17:38.292 fused_ordering(244) 00:17:38.292 fused_ordering(245) 00:17:38.292 fused_ordering(246) 00:17:38.292 fused_ordering(247) 00:17:38.292 fused_ordering(248) 00:17:38.292 fused_ordering(249) 00:17:38.292 fused_ordering(250) 00:17:38.292 fused_ordering(251) 00:17:38.292 fused_ordering(252) 00:17:38.292 fused_ordering(253) 00:17:38.292 fused_ordering(254) 00:17:38.292 fused_ordering(255) 00:17:38.292 fused_ordering(256) 00:17:38.292 fused_ordering(257) 00:17:38.292 fused_ordering(258) 00:17:38.292 fused_ordering(259) 00:17:38.292 fused_ordering(260) 00:17:38.292 fused_ordering(261) 00:17:38.292 fused_ordering(262) 00:17:38.292 fused_ordering(263) 00:17:38.292 fused_ordering(264) 00:17:38.292 fused_ordering(265) 00:17:38.292 fused_ordering(266) 00:17:38.292 fused_ordering(267) 00:17:38.292 fused_ordering(268) 00:17:38.292 fused_ordering(269) 00:17:38.292 fused_ordering(270) 00:17:38.292 fused_ordering(271) 00:17:38.292 fused_ordering(272) 00:17:38.292 fused_ordering(273) 00:17:38.292 fused_ordering(274) 00:17:38.292 fused_ordering(275) 00:17:38.292 fused_ordering(276) 00:17:38.292 fused_ordering(277) 00:17:38.292 fused_ordering(278) 00:17:38.292 fused_ordering(279) 00:17:38.292 fused_ordering(280) 00:17:38.292 fused_ordering(281) 00:17:38.292 fused_ordering(282) 00:17:38.292 fused_ordering(283) 00:17:38.292 fused_ordering(284) 00:17:38.292 fused_ordering(285) 00:17:38.292 fused_ordering(286) 00:17:38.292 fused_ordering(287) 00:17:38.292 fused_ordering(288) 00:17:38.292 fused_ordering(289) 00:17:38.292 fused_ordering(290) 00:17:38.292 fused_ordering(291) 00:17:38.292 fused_ordering(292) 00:17:38.292 fused_ordering(293) 00:17:38.292 fused_ordering(294) 00:17:38.292 fused_ordering(295) 00:17:38.292 fused_ordering(296) 00:17:38.292 fused_ordering(297) 00:17:38.292 fused_ordering(298) 00:17:38.292 fused_ordering(299) 00:17:38.292 fused_ordering(300) 00:17:38.292 fused_ordering(301) 00:17:38.292 fused_ordering(302) 00:17:38.292 fused_ordering(303) 00:17:38.292 fused_ordering(304) 00:17:38.292 fused_ordering(305) 00:17:38.292 fused_ordering(306) 00:17:38.292 fused_ordering(307) 00:17:38.292 fused_ordering(308) 00:17:38.292 fused_ordering(309) 00:17:38.292 fused_ordering(310) 00:17:38.292 fused_ordering(311) 00:17:38.292 fused_ordering(312) 00:17:38.292 fused_ordering(313) 
00:17:38.292 fused_ordering(314) 00:17:38.292 fused_ordering(315) 00:17:38.292 fused_ordering(316) 00:17:38.292 fused_ordering(317) 00:17:38.292 fused_ordering(318) 00:17:38.292 fused_ordering(319) 00:17:38.292 fused_ordering(320) 00:17:38.292 fused_ordering(321) 00:17:38.292 fused_ordering(322) 00:17:38.292 fused_ordering(323) 00:17:38.292 fused_ordering(324) 00:17:38.292 fused_ordering(325) 00:17:38.292 fused_ordering(326) 00:17:38.292 fused_ordering(327) 00:17:38.292 fused_ordering(328) 00:17:38.292 fused_ordering(329) 00:17:38.292 fused_ordering(330) 00:17:38.292 fused_ordering(331) 00:17:38.292 fused_ordering(332) 00:17:38.292 fused_ordering(333) 00:17:38.292 fused_ordering(334) 00:17:38.293 fused_ordering(335) 00:17:38.293 fused_ordering(336) 00:17:38.293 fused_ordering(337) 00:17:38.293 fused_ordering(338) 00:17:38.293 fused_ordering(339) 00:17:38.293 fused_ordering(340) 00:17:38.293 fused_ordering(341) 00:17:38.293 fused_ordering(342) 00:17:38.293 fused_ordering(343) 00:17:38.293 fused_ordering(344) 00:17:38.293 fused_ordering(345) 00:17:38.293 fused_ordering(346) 00:17:38.293 fused_ordering(347) 00:17:38.293 fused_ordering(348) 00:17:38.293 fused_ordering(349) 00:17:38.293 fused_ordering(350) 00:17:38.293 fused_ordering(351) 00:17:38.293 fused_ordering(352) 00:17:38.293 fused_ordering(353) 00:17:38.293 fused_ordering(354) 00:17:38.293 fused_ordering(355) 00:17:38.293 fused_ordering(356) 00:17:38.293 fused_ordering(357) 00:17:38.293 fused_ordering(358) 00:17:38.293 fused_ordering(359) 00:17:38.293 fused_ordering(360) 00:17:38.293 fused_ordering(361) 00:17:38.293 fused_ordering(362) 00:17:38.293 fused_ordering(363) 00:17:38.293 fused_ordering(364) 00:17:38.293 fused_ordering(365) 00:17:38.293 fused_ordering(366) 00:17:38.293 fused_ordering(367) 00:17:38.293 fused_ordering(368) 00:17:38.293 fused_ordering(369) 00:17:38.293 fused_ordering(370) 00:17:38.293 fused_ordering(371) 00:17:38.293 fused_ordering(372) 00:17:38.293 fused_ordering(373) 00:17:38.293 fused_ordering(374) 00:17:38.293 fused_ordering(375) 00:17:38.293 fused_ordering(376) 00:17:38.293 fused_ordering(377) 00:17:38.293 fused_ordering(378) 00:17:38.293 fused_ordering(379) 00:17:38.293 fused_ordering(380) 00:17:38.293 fused_ordering(381) 00:17:38.293 fused_ordering(382) 00:17:38.293 fused_ordering(383) 00:17:38.293 fused_ordering(384) 00:17:38.293 fused_ordering(385) 00:17:38.293 fused_ordering(386) 00:17:38.293 fused_ordering(387) 00:17:38.293 fused_ordering(388) 00:17:38.293 fused_ordering(389) 00:17:38.293 fused_ordering(390) 00:17:38.293 fused_ordering(391) 00:17:38.293 fused_ordering(392) 00:17:38.293 fused_ordering(393) 00:17:38.293 fused_ordering(394) 00:17:38.293 fused_ordering(395) 00:17:38.293 fused_ordering(396) 00:17:38.293 fused_ordering(397) 00:17:38.293 fused_ordering(398) 00:17:38.293 fused_ordering(399) 00:17:38.293 fused_ordering(400) 00:17:38.293 fused_ordering(401) 00:17:38.293 fused_ordering(402) 00:17:38.293 fused_ordering(403) 00:17:38.293 fused_ordering(404) 00:17:38.293 fused_ordering(405) 00:17:38.293 fused_ordering(406) 00:17:38.293 fused_ordering(407) 00:17:38.293 fused_ordering(408) 00:17:38.293 fused_ordering(409) 00:17:38.293 fused_ordering(410) 00:17:38.863 fused_ordering(411) 00:17:38.863 fused_ordering(412) 00:17:38.863 fused_ordering(413) 00:17:38.863 fused_ordering(414) 00:17:38.863 fused_ordering(415) 00:17:38.863 fused_ordering(416) 00:17:38.863 fused_ordering(417) 00:17:38.863 fused_ordering(418) 00:17:38.863 fused_ordering(419) 00:17:38.863 fused_ordering(420) 00:17:38.863 
fused_ordering(421) 00:17:38.863 fused_ordering(422) 00:17:38.863 fused_ordering(423) 00:17:38.863 fused_ordering(424) 00:17:38.863 fused_ordering(425) 00:17:38.863 fused_ordering(426) 00:17:38.863 fused_ordering(427) 00:17:38.864 fused_ordering(428) 00:17:38.864 fused_ordering(429) 00:17:38.864 fused_ordering(430) 00:17:38.864 fused_ordering(431) 00:17:38.864 fused_ordering(432) 00:17:38.864 fused_ordering(433) 00:17:38.864 fused_ordering(434) 00:17:38.864 fused_ordering(435) 00:17:38.864 fused_ordering(436) 00:17:38.864 fused_ordering(437) 00:17:38.864 fused_ordering(438) 00:17:38.864 fused_ordering(439) 00:17:38.864 fused_ordering(440) 00:17:38.864 fused_ordering(441) 00:17:38.864 fused_ordering(442) 00:17:38.864 fused_ordering(443) 00:17:38.864 fused_ordering(444) 00:17:38.864 fused_ordering(445) 00:17:38.864 fused_ordering(446) 00:17:38.864 fused_ordering(447) 00:17:38.864 fused_ordering(448) 00:17:38.864 fused_ordering(449) 00:17:38.864 fused_ordering(450) 00:17:38.864 fused_ordering(451) 00:17:38.864 fused_ordering(452) 00:17:38.864 fused_ordering(453) 00:17:38.864 fused_ordering(454) 00:17:38.864 fused_ordering(455) 00:17:38.864 fused_ordering(456) 00:17:38.864 fused_ordering(457) 00:17:38.864 fused_ordering(458) 00:17:38.864 fused_ordering(459) 00:17:38.864 fused_ordering(460) 00:17:38.864 fused_ordering(461) 00:17:38.864 fused_ordering(462) 00:17:38.864 fused_ordering(463) 00:17:38.864 fused_ordering(464) 00:17:38.864 fused_ordering(465) 00:17:38.864 fused_ordering(466) 00:17:38.864 fused_ordering(467) 00:17:38.864 fused_ordering(468) 00:17:38.864 fused_ordering(469) 00:17:38.864 fused_ordering(470) 00:17:38.864 fused_ordering(471) 00:17:38.864 fused_ordering(472) 00:17:38.864 fused_ordering(473) 00:17:38.864 fused_ordering(474) 00:17:38.864 fused_ordering(475) 00:17:38.864 fused_ordering(476) 00:17:38.864 fused_ordering(477) 00:17:38.864 fused_ordering(478) 00:17:38.864 fused_ordering(479) 00:17:38.864 fused_ordering(480) 00:17:38.864 fused_ordering(481) 00:17:38.864 fused_ordering(482) 00:17:38.864 fused_ordering(483) 00:17:38.864 fused_ordering(484) 00:17:38.864 fused_ordering(485) 00:17:38.864 fused_ordering(486) 00:17:38.864 fused_ordering(487) 00:17:38.864 fused_ordering(488) 00:17:38.864 fused_ordering(489) 00:17:38.864 fused_ordering(490) 00:17:38.864 fused_ordering(491) 00:17:38.864 fused_ordering(492) 00:17:38.864 fused_ordering(493) 00:17:38.864 fused_ordering(494) 00:17:38.864 fused_ordering(495) 00:17:38.864 fused_ordering(496) 00:17:38.864 fused_ordering(497) 00:17:38.864 fused_ordering(498) 00:17:38.864 fused_ordering(499) 00:17:38.864 fused_ordering(500) 00:17:38.864 fused_ordering(501) 00:17:38.864 fused_ordering(502) 00:17:38.864 fused_ordering(503) 00:17:38.864 fused_ordering(504) 00:17:38.864 fused_ordering(505) 00:17:38.864 fused_ordering(506) 00:17:38.864 fused_ordering(507) 00:17:38.864 fused_ordering(508) 00:17:38.864 fused_ordering(509) 00:17:38.864 fused_ordering(510) 00:17:38.864 fused_ordering(511) 00:17:38.864 fused_ordering(512) 00:17:38.864 fused_ordering(513) 00:17:38.864 fused_ordering(514) 00:17:38.864 fused_ordering(515) 00:17:38.864 fused_ordering(516) 00:17:38.864 fused_ordering(517) 00:17:38.864 fused_ordering(518) 00:17:38.864 fused_ordering(519) 00:17:38.864 fused_ordering(520) 00:17:38.864 fused_ordering(521) 00:17:38.864 fused_ordering(522) 00:17:38.864 fused_ordering(523) 00:17:38.864 fused_ordering(524) 00:17:38.864 fused_ordering(525) 00:17:38.864 fused_ordering(526) 00:17:38.864 fused_ordering(527) 00:17:38.864 fused_ordering(528) 
[fused_ordering counter output elided: entries 529 through 958, timestamped between 00:17:38.864 and 00:17:40.006, repeat the same "fused_ordering(N)" pattern with consecutive N; the run resumes below and ends at entry 1023]
00:17:40.006 fused_ordering(959) 00:17:40.006 fused_ordering(960) 00:17:40.006 fused_ordering(961) 00:17:40.006 fused_ordering(962) 00:17:40.006 fused_ordering(963) 00:17:40.006 fused_ordering(964) 00:17:40.006 fused_ordering(965) 00:17:40.006 fused_ordering(966) 00:17:40.006 fused_ordering(967) 00:17:40.006 fused_ordering(968) 00:17:40.006 fused_ordering(969) 00:17:40.006 fused_ordering(970) 00:17:40.006 fused_ordering(971) 00:17:40.006 fused_ordering(972) 00:17:40.006 fused_ordering(973) 00:17:40.006 fused_ordering(974) 00:17:40.006 fused_ordering(975) 00:17:40.006 fused_ordering(976) 00:17:40.006 fused_ordering(977) 00:17:40.006 fused_ordering(978) 00:17:40.006 fused_ordering(979) 00:17:40.006 fused_ordering(980) 00:17:40.006 fused_ordering(981) 00:17:40.006 fused_ordering(982) 00:17:40.006 fused_ordering(983) 00:17:40.006 fused_ordering(984) 00:17:40.006 fused_ordering(985) 00:17:40.006 fused_ordering(986) 00:17:40.006 fused_ordering(987) 00:17:40.006 fused_ordering(988) 00:17:40.006 fused_ordering(989) 00:17:40.006 fused_ordering(990) 00:17:40.006 fused_ordering(991) 00:17:40.006 fused_ordering(992) 00:17:40.006 fused_ordering(993) 00:17:40.006 fused_ordering(994) 00:17:40.006 fused_ordering(995) 00:17:40.006 fused_ordering(996) 00:17:40.006 fused_ordering(997) 00:17:40.006 fused_ordering(998) 00:17:40.006 fused_ordering(999) 00:17:40.006 fused_ordering(1000) 00:17:40.006 fused_ordering(1001) 00:17:40.006 fused_ordering(1002) 00:17:40.006 fused_ordering(1003) 00:17:40.006 fused_ordering(1004) 00:17:40.006 fused_ordering(1005) 00:17:40.006 fused_ordering(1006) 00:17:40.006 fused_ordering(1007) 00:17:40.006 fused_ordering(1008) 00:17:40.006 fused_ordering(1009) 00:17:40.006 fused_ordering(1010) 00:17:40.006 fused_ordering(1011) 00:17:40.006 fused_ordering(1012) 00:17:40.006 fused_ordering(1013) 00:17:40.006 fused_ordering(1014) 00:17:40.006 fused_ordering(1015) 00:17:40.006 fused_ordering(1016) 00:17:40.006 fused_ordering(1017) 00:17:40.006 fused_ordering(1018) 00:17:40.006 fused_ordering(1019) 00:17:40.006 fused_ordering(1020) 00:17:40.006 fused_ordering(1021) 00:17:40.006 fused_ordering(1022) 00:17:40.006 fused_ordering(1023) 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.006 rmmod nvme_tcp 00:17:40.006 rmmod nvme_fabrics 00:17:40.006 rmmod nvme_keyring 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.006 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:40.007 10:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3826618 ']' 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3826618 ']' 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3826618' 00:17:40.007 killing process with pid 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3826618 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.007 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.267 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.180 00:17:42.180 real 0m14.368s 00:17:42.180 user 0m7.466s 00:17:42.180 sys 0m7.684s 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:42.180 ************************************ 00:17:42.180 END TEST nvmf_fused_ordering 00:17:42.180 
************************************ 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.180 ************************************ 00:17:42.180 START TEST nvmf_ns_masking 00:17:42.180 ************************************ 00:17:42.180 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:42.442 * Looking for test storage... 00:17:42.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.442 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.443 --rc genhtml_branch_coverage=1 00:17:42.443 --rc genhtml_function_coverage=1 00:17:42.443 --rc genhtml_legend=1 00:17:42.443 --rc geninfo_all_blocks=1 00:17:42.443 --rc geninfo_unexecuted_blocks=1 00:17:42.443 00:17:42.443 ' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.443 --rc genhtml_branch_coverage=1 00:17:42.443 --rc genhtml_function_coverage=1 00:17:42.443 --rc genhtml_legend=1 00:17:42.443 --rc geninfo_all_blocks=1 00:17:42.443 --rc geninfo_unexecuted_blocks=1 00:17:42.443 00:17:42.443 ' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.443 --rc genhtml_branch_coverage=1 00:17:42.443 --rc genhtml_function_coverage=1 00:17:42.443 --rc genhtml_legend=1 00:17:42.443 --rc geninfo_all_blocks=1 00:17:42.443 --rc geninfo_unexecuted_blocks=1 00:17:42.443 00:17:42.443 ' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:42.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.443 --rc genhtml_branch_coverage=1 00:17:42.443 --rc genhtml_function_coverage=1 00:17:42.443 --rc genhtml_legend=1 00:17:42.443 --rc geninfo_all_blocks=1 00:17:42.443 --rc geninfo_unexecuted_blocks=1 00:17:42.443 00:17:42.443 ' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e6d8f049-7b22-4e6c-b01c-34fde6fc50f4 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b8b9ae30-917b-4cca-b8bc-e8e73d0320a2 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6daff7ad-8b93-429c-b07d-fe34c62daac6 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:42.443 10:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.695 10:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:50.695 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:50.695 10:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:50.695 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:50.695 Found net devices under 0000:31:00.0: cvl_0_0 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
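The device discovery traced above reduces to a sysfs lookup: nvmf/common.sh whitelists the Intel E810 PCI IDs (0x8086:0x159b), then resolves each PCI function to its kernel net device by globbing /sys/bus/pci/devices/<addr>/net/. A minimal sketch of that lookup, with the operstate test standing in for the script's own "up" check (which is not visible in this excerpt):

#!/usr/bin/env bash
# Sketch only: resolve a PCI NIC (e.g. 0000:31:00.0, vendor 0x8086 device 0x159b)
# to the net interface the kernel created for it, the same information the
# trace reports as "Found net devices under 0000:31:00.0: cvl_0_0".
pci_to_netdevs() {
    local pci=$1 path dev
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                 # PCI function has no net device bound
        dev=${path##*/}
        # operstate check is an assumption; common.sh applies its own "up" test
        if [[ $(cat "/sys/class/net/$dev/operstate" 2>/dev/null) == up ]]; then
            echo "Found net devices under $pci: $dev"
        fi
    done
}
pci_to_netdevs 0000:31:00.0
pci_to_netdevs 0000:31:00.1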
00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:50.695 Found net devices under 0000:31:00.1: cvl_0_1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:50.695 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.695 10:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.695 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.695 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.695 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:50.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:17:50.696 00:17:50.696 --- 10.0.0.2 ping statistics --- 00:17:50.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.696 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:17:50.696 00:17:50.696 --- 10.0.0.1 ping statistics --- 00:17:50.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.696 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3832076 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3832076 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3832076 ']' 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.696 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.957 [2024-11-06 10:09:54.210801] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:17:50.957 [2024-11-06 10:09:54.210875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.957 [2024-11-06 10:09:54.304434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.957 [2024-11-06 10:09:54.344100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.957 [2024-11-06 10:09:54.344137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.957 [2024-11-06 10:09:54.344146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.957 [2024-11-06 10:09:54.344152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.957 [2024-11-06 10:09:54.344158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
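At this point the target-side plumbing is complete: the E810 port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, an SPDK_NVMF-tagged iptables rule opens TCP port 4420, connectivity is verified by ping in both directions, and nvmf_tgt (pid 3832076) is started inside the namespace while waitforlisten polls /var/tmp/spdk.sock. Collapsed into one place, and using only the interface names, addresses and flags visible in this trace, the setup amounts to roughly the following (illustrative sketch, not the nvmf/common.sh source):

#!/usr/bin/env bash
# Rough consolidation of the nvmf_tcp_init sequence traced above.
set -e
TGT_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"                # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up

# NVMe/TCP listener port; the SPDK_NVMF comment lets nvmftestfini strip the
# rule later with iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                 # root namespace -> target namespace
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1         # target namespace -> root namespace

# Start the SPDK NVMe-oF target inside the namespace; the harness then waits
# for it to accept RPCs on /var/tmp/spdk.sock before issuing rpc.py calls.
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &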
00:17:50.957 [2024-11-06 10:09:54.344811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.527 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.527 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:17:51.527 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.527 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.527 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.788 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.788 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.788 [2024-11-06 10:09:55.194339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.788 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:51.788 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:51.788 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:52.047 Malloc1 00:17:52.047 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:52.306 Malloc2 00:17:52.306 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:52.306 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:52.566 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.566 [2024-11-06 10:09:56.025626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.566 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:52.566 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6daff7ad-8b93-429c-b07d-fe34c62daac6 -a 10.0.0.2 -s 4420 -i 4 00:17:52.826 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.826 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:17:52.826 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.826 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:52.826 
10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:55.371 [ 0]:0x1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d951f3619a1f478d8a3067118cf3ae7c 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d951f3619a1f478d8a3067118cf3ae7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.371 [ 0]:0x1 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d951f3619a1f478d8a3067118cf3ae7c 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d951f3619a1f478d8a3067118cf3ae7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.371 10:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:55.371 [ 1]:0x2 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.371 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:55.631 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6daff7ad-8b93-429c-b07d-fe34c62daac6 -a 10.0.0.2 -s 4420 -i 4 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:17:55.891 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:17:57.802 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:57.802 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:57.802 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.063 [ 0]:0x2 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=60244a5c392348c2889eacaf32e11803 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.063 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.323 [ 0]:0x1 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d951f3619a1f478d8a3067118cf3ae7c 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d951f3619a1f478d8a3067118cf3ae7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.323 [ 1]:0x2 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.323 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.583 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:58.583 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.583 10:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:58.583 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.583 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.584 [ 0]:0x2 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.584 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.844 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:17:58.844 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.844 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:58.844 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.844 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6daff7ad-8b93-429c-b07d-fe34c62daac6 -a 10.0.0.2 -s 4420 -i 4 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:17:59.104 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.647 [ 0]:0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d951f3619a1f478d8a3067118cf3ae7c 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d951f3619a1f478d8a3067118cf3ae7c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.647 [ 1]:0x2 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.647 10:10:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.647 [ 0]:0x2 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.647 10:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:01.647 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.908 [2024-11-06 10:10:05.208247] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:01.908 request: 00:18:01.908 { 00:18:01.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.908 "nsid": 2, 00:18:01.908 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.908 "method": "nvmf_ns_remove_host", 00:18:01.908 "req_id": 1 00:18:01.908 } 00:18:01.908 Got JSON-RPC error response 00:18:01.908 response: 00:18:01.908 { 00:18:01.908 "code": -32602, 00:18:01.908 "message": "Invalid parameters" 00:18:01.908 } 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.908 10:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.908 [ 0]:0x2 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=60244a5c392348c2889eacaf32e11803 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 60244a5c392348c2889eacaf32e11803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:01.908 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3834289 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3834289 /var/tmp/host.sock 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3834289 ']' 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:02.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.170 10:10:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:02.170 [2024-11-06 10:10:05.513176] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:02.170 [2024-11-06 10:10:05.513227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834289 ] 00:18:02.170 [2024-11-06 10:10:05.608819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.170 [2024-11-06 10:10:05.644550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.112 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.112 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:03.112 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:03.112 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e6d8f049-7b22-4e6c-b01c-34fde6fc50f4 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E6D8F0497B224E6CB01C34FDE6FC50F4 -i 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b8b9ae30-917b-4cca-b8bc-e8e73d0320a2 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:03.372 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B8B9AE30917B4CCAB8BCE8E73D0320A2 -i 00:18:03.632 10:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:03.632 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:03.891 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:03.891 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:04.152 nvme0n1 00:18:04.152 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:04.152 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:04.413 nvme1n2 00:18:04.413 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:04.413 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:04.413 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:04.413 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:04.413 10:10:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:04.673 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:04.673 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:04.673 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:04.674 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:04.933 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e6d8f049-7b22-4e6c-b01c-34fde6fc50f4 == \e\6\d\8\f\0\4\9\-\7\b\2\2\-\4\e\6\c\-\b\0\1\c\-\3\4\f\d\e\6\f\c\5\0\f\4 ]] 00:18:04.933 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:04.933 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:04.933 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:04.933 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
b8b9ae30-917b-4cca-b8bc-e8e73d0320a2 == \b\8\b\9\a\e\3\0\-\9\1\7\b\-\4\c\c\a\-\b\8\b\c\-\e\8\e\7\3\d\0\3\2\0\a\2 ]] 00:18:04.934 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.194 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e6d8f049-7b22-4e6c-b01c-34fde6fc50f4 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E6D8F0497B224E6CB01C34FDE6FC50F4 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E6D8F0497B224E6CB01C34FDE6FC50F4 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E6D8F0497B224E6CB01C34FDE6FC50F4 00:18:05.454 [2024-11-06 10:10:08.874661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:05.454 [2024-11-06 10:10:08.874697] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:05.454 [2024-11-06 10:10:08.874707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.454 request: 00:18:05.454 { 00:18:05.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.454 "namespace": { 00:18:05.454 "bdev_name": 
"invalid", 00:18:05.454 "nsid": 1, 00:18:05.454 "nguid": "E6D8F0497B224E6CB01C34FDE6FC50F4", 00:18:05.454 "no_auto_visible": false 00:18:05.454 }, 00:18:05.454 "method": "nvmf_subsystem_add_ns", 00:18:05.454 "req_id": 1 00:18:05.454 } 00:18:05.454 Got JSON-RPC error response 00:18:05.454 response: 00:18:05.454 { 00:18:05.454 "code": -32602, 00:18:05.454 "message": "Invalid parameters" 00:18:05.454 } 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e6d8f049-7b22-4e6c-b01c-34fde6fc50f4 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:05.454 10:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E6D8F0497B224E6CB01C34FDE6FC50F4 -i 00:18:05.714 10:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:07.624 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:07.624 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:07.624 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3834289 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3834289 ']' 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3834289 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3834289 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3834289' 00:18:07.884 killing process with pid 3834289 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3834289 00:18:07.884 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3834289 00:18:08.144 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.404 rmmod nvme_tcp 00:18:08.404 rmmod nvme_fabrics 00:18:08.404 rmmod nvme_keyring 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3832076 ']' 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3832076 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3832076 ']' 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3832076 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3832076 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3832076' 00:18:08.404 killing process with pid 3832076 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3832076 00:18:08.404 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3832076 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:08.665 10:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.665 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.665 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.665 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.665 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.665 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.576 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.576 00:18:10.576 real 0m28.405s 00:18:10.576 user 0m31.206s 00:18:10.576 sys 0m8.684s 00:18:10.576 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:10.576 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.576 ************************************ 00:18:10.576 END TEST nvmf_ns_masking 00:18:10.576 ************************************ 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.836 ************************************ 00:18:10.836 START TEST nvmf_nvme_cli 00:18:10.836 ************************************ 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:10.836 * Looking for test storage... 
00:18:10.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.836 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.837 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.098 --rc genhtml_branch_coverage=1 00:18:11.098 --rc genhtml_function_coverage=1 00:18:11.098 --rc genhtml_legend=1 00:18:11.098 --rc geninfo_all_blocks=1 00:18:11.098 --rc geninfo_unexecuted_blocks=1 00:18:11.098 00:18:11.098 ' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.098 --rc genhtml_branch_coverage=1 00:18:11.098 --rc genhtml_function_coverage=1 00:18:11.098 --rc genhtml_legend=1 00:18:11.098 --rc geninfo_all_blocks=1 00:18:11.098 --rc geninfo_unexecuted_blocks=1 00:18:11.098 00:18:11.098 ' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.098 --rc genhtml_branch_coverage=1 00:18:11.098 --rc genhtml_function_coverage=1 00:18:11.098 --rc genhtml_legend=1 00:18:11.098 --rc geninfo_all_blocks=1 00:18:11.098 --rc geninfo_unexecuted_blocks=1 00:18:11.098 00:18:11.098 ' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.098 --rc genhtml_branch_coverage=1 00:18:11.098 --rc genhtml_function_coverage=1 00:18:11.098 --rc genhtml_legend=1 00:18:11.098 --rc geninfo_all_blocks=1 00:18:11.098 --rc geninfo_unexecuted_blocks=1 00:18:11.098 00:18:11.098 ' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.098 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.099 10:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.099 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:19.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:19.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.233 
10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.233 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:19.234 Found net devices under 0000:31:00.0: cvl_0_0 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:19.234 Found net devices under 0000:31:00.1: cvl_0_1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.234 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:18:19.495 00:18:19.495 --- 10.0.0.2 ping statistics --- 00:18:19.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.495 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:19.495 00:18:19.495 --- 10.0.0.1 ping statistics --- 00:18:19.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.495 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3840330 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3840330 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3840330 ']' 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.495 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.495 [2024-11-06 10:10:22.865765] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:19.495 [2024-11-06 10:10:22.865837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.495 [2024-11-06 10:10:22.957740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.756 [2024-11-06 10:10:23.001223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.756 [2024-11-06 10:10:23.001262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.756 [2024-11-06 10:10:23.001270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.756 [2024-11-06 10:10:23.001277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.756 [2024-11-06 10:10:23.001283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.756 [2024-11-06 10:10:23.002898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.756 [2024-11-06 10:10:23.003113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.756 [2024-11-06 10:10:23.003113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.756 [2024-11-06 10:10:23.002980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 [2024-11-06 10:10:23.710674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 Malloc0 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 Malloc1 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 [2024-11-06 10:10:23.807680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.327 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:18:20.587 00:18:20.587 Discovery Log Number of Records 2, Generation counter 2 00:18:20.587 =====Discovery Log Entry 0====== 00:18:20.587 trtype: tcp 00:18:20.587 adrfam: ipv4 00:18:20.587 subtype: current discovery subsystem 00:18:20.587 treq: not required 00:18:20.587 portid: 0 00:18:20.588 trsvcid: 4420 00:18:20.588 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:20.588 traddr: 10.0.0.2 00:18:20.588 eflags: explicit discovery connections, duplicate discovery information 00:18:20.588 sectype: none 00:18:20.588 =====Discovery Log Entry 1====== 00:18:20.588 trtype: tcp 00:18:20.588 adrfam: ipv4 00:18:20.588 subtype: nvme subsystem 00:18:20.588 treq: not required 00:18:20.588 portid: 0 00:18:20.588 trsvcid: 4420 00:18:20.588 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:20.588 traddr: 10.0.0.2 00:18:20.588 eflags: none 00:18:20.588 sectype: none 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:20.588 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:22.518 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:22.518 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:18:22.518 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.518 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:22.519 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:22.519 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:24.464 10:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:24.464 /dev/nvme0n2 ]] 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:24.464 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:24.465 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.769 10:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:24.769 rmmod nvme_tcp 00:18:24.769 rmmod nvme_fabrics 00:18:24.769 rmmod nvme_keyring 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:24.769 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:24.770 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3840330 ']' 00:18:24.770 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3840330 00:18:24.770 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3840330 ']' 00:18:24.770 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3840330 00:18:24.770 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3840330 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3840330' 00:18:25.034 killing process with pid 3840330 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3840330 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3840330 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.034 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:27.578 00:18:27.578 real 0m16.395s 00:18:27.578 user 0m24.237s 00:18:27.578 sys 0m7.000s 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:27.578 ************************************ 00:18:27.578 END TEST nvmf_nvme_cli 00:18:27.578 ************************************ 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:27.578 ************************************ 00:18:27.578 START TEST nvmf_vfio_user 00:18:27.578 ************************************ 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:27.578 * Looking for test storage... 00:18:27.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.578 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.579 --rc genhtml_branch_coverage=1 00:18:27.579 --rc genhtml_function_coverage=1 00:18:27.579 --rc genhtml_legend=1 00:18:27.579 --rc geninfo_all_blocks=1 00:18:27.579 --rc geninfo_unexecuted_blocks=1 00:18:27.579 00:18:27.579 ' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.579 --rc genhtml_branch_coverage=1 00:18:27.579 --rc genhtml_function_coverage=1 00:18:27.579 --rc genhtml_legend=1 00:18:27.579 --rc geninfo_all_blocks=1 00:18:27.579 --rc geninfo_unexecuted_blocks=1 00:18:27.579 00:18:27.579 ' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.579 --rc genhtml_branch_coverage=1 00:18:27.579 --rc genhtml_function_coverage=1 00:18:27.579 --rc genhtml_legend=1 00:18:27.579 --rc geninfo_all_blocks=1 00:18:27.579 --rc geninfo_unexecuted_blocks=1 00:18:27.579 00:18:27.579 ' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.579 --rc genhtml_branch_coverage=1 00:18:27.579 --rc genhtml_function_coverage=1 00:18:27.579 --rc genhtml_legend=1 00:18:27.579 --rc geninfo_all_blocks=1 00:18:27.579 --rc geninfo_unexecuted_blocks=1 00:18:27.579 00:18:27.579 ' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:27.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
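The "[: : integer expression expected" complaint printed each time nvmf/common.sh is sourced comes from a numeric test on an empty string (the traced command just above is '[' '' -eq 1 ']'); the trace shows the script simply continuing afterwards, so it is noise rather than a failure. A minimal reproduction in plain bash, with an illustrative variable name rather than the one common.sh actually tests:

  v=''                                  # illustrative; stands in for an unset feature flag
  [ "$v" -eq 1 ]                        # -> bash: [: : integer expression expected (exit status 2)
  # quieter pattern: give the flag a numeric default before comparing
  v=${v:-0}; [ "$v" -eq 1 ] && echo enabled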
00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3842150 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3842150' 00:18:27.579 Process pid: 3842150 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3842150 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3842150 ']' 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.579 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:27.579 [2024-11-06 10:10:30.872494] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:27.579 [2024-11-06 10:10:30.872546] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.579 [2024-11-06 10:10:30.951851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.579 [2024-11-06 10:10:30.987781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.580 [2024-11-06 10:10:30.987813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:27.580 [2024-11-06 10:10:30.987820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.580 [2024-11-06 10:10:30.987827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.580 [2024-11-06 10:10:30.987833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.580 [2024-11-06 10:10:30.989448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.580 [2024-11-06 10:10:30.989585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.580 [2024-11-06 10:10:30.989744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.580 [2024-11-06 10:10:30.989745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.522 10:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.522 10:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:18:28.522 10:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:29.465 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:29.725 Malloc1 00:18:29.725 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:29.985 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:29.985 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:30.245 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:30.245 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:30.245 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:30.505 Malloc2 00:18:30.505 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
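The RPC sequence traced through this stretch provisions one vfio-user controller per device: the VFIOUSER transport is created once, then for each device a socket directory is made, a malloc bdev is created to back the namespace, and subsystem, namespace, and listener are wired together, with the directory path standing in for a network address. A condensed sketch for the first device, reusing the paths and sizes from the trace and assuming the default /var/tmp/spdk.sock RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                        # once per target
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1               # this directory is the listener "address"
  $rpc bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0            # -s 0 as in the trace; no port for vfio-user

The second device repeats the same steps with Malloc2, cnode2, and /var/run/vfio-user/domain/vfio-user2/2.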
00:18:30.505 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:30.765 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:31.028 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:31.028 [2024-11-06 10:10:34.396479] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:31.028 [2024-11-06 10:10:34.396527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842848 ] 00:18:31.028 [2024-11-06 10:10:34.452025] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:31.028 [2024-11-06 10:10:34.458184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:31.028 [2024-11-06 10:10:34.458206] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8961e6f000 00:18:31.028 [2024-11-06 10:10:34.459177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.460177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.461184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.462192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.463198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.464203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.465217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.466215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:31.028 [2024-11-06 10:10:34.467233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:31.028 [2024-11-06 10:10:34.467243] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8961e64000 00:18:31.028 [2024-11-06 10:10:34.468571] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:31.028 [2024-11-06 10:10:34.488484] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:31.028 [2024-11-06 10:10:34.488520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:31.028 [2024-11-06 10:10:34.491367] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:31.028 [2024-11-06 10:10:34.491413] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:31.028 [2024-11-06 10:10:34.491496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:31.028 [2024-11-06 10:10:34.491514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:31.028 [2024-11-06 10:10:34.491519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:31.028 [2024-11-06 10:10:34.492367] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:31.028 [2024-11-06 10:10:34.492378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:31.028 [2024-11-06 10:10:34.492385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:31.028 [2024-11-06 10:10:34.493371] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:31.028 [2024-11-06 10:10:34.493380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:31.028 [2024-11-06 10:10:34.493388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:31.028 [2024-11-06 10:10:34.494376] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:31.028 [2024-11-06 10:10:34.494384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:31.028 [2024-11-06 10:10:34.495375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:31.028 [2024-11-06 10:10:34.495383] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:31.028 [2024-11-06 10:10:34.495389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:31.028 [2024-11-06 10:10:34.495396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:31.028 [2024-11-06 10:10:34.495504] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:31.028 [2024-11-06 10:10:34.495509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:31.028 [2024-11-06 10:10:34.495514] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:31.029 [2024-11-06 10:10:34.496384] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:31.029 [2024-11-06 10:10:34.497389] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:31.029 [2024-11-06 10:10:34.498397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:31.029 [2024-11-06 10:10:34.499390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.029 [2024-11-06 10:10:34.499445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:31.029 [2024-11-06 10:10:34.500404] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:31.029 [2024-11-06 10:10:34.500412] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:31.029 [2024-11-06 10:10:34.500418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500439] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:31.029 [2024-11-06 10:10:34.500452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500467] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.029 [2024-11-06 10:10:34.500472] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.029 [2024-11-06 10:10:34.500476] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.029 [2024-11-06 10:10:34.500490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500539] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:31.029 [2024-11-06 10:10:34.500544] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:31.029 [2024-11-06 10:10:34.500549] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:31.029 [2024-11-06 10:10:34.500554] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:31.029 [2024-11-06 10:10:34.500563] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:31.029 [2024-11-06 10:10:34.500568] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:31.029 [2024-11-06 10:10:34.500573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.029 [2024-11-06 10:10:34.500622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.029 [2024-11-06 10:10:34.500630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.029 [2024-11-06 10:10:34.500639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.029 [2024-11-06 10:10:34.500646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500677] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:31.029 
[2024-11-06 10:10:34.500682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500792] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:31.029 [2024-11-06 10:10:34.500797] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:31.029 [2024-11-06 10:10:34.500800] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.029 [2024-11-06 10:10:34.500806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500830] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:31.029 [2024-11-06 10:10:34.500839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500854] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.029 [2024-11-06 10:10:34.500859] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.029 [2024-11-06 10:10:34.500866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.029 [2024-11-06 10:10:34.500872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500918] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:31.029 [2024-11-06 10:10:34.500922] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.029 [2024-11-06 10:10:34.500925] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.029 [2024-11-06 10:10:34.500931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.500941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.500949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500987] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:31.029 [2024-11-06 10:10:34.500991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:31.029 [2024-11-06 10:10:34.500997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:31.029 [2024-11-06 10:10:34.501016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.501026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.501038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.501048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.501059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.501066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.501077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:31.029 [2024-11-06 10:10:34.501085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:31.029 [2024-11-06 10:10:34.501098] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:31.030 [2024-11-06 10:10:34.501103] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:31.030 [2024-11-06 10:10:34.501108] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:31.030 [2024-11-06 10:10:34.501112] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:31.030 [2024-11-06 10:10:34.501115] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:31.030 [2024-11-06 10:10:34.501121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:31.030 [2024-11-06 10:10:34.501129] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:31.030 [2024-11-06 10:10:34.501134] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:31.030 [2024-11-06 10:10:34.501137] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.030 [2024-11-06 10:10:34.501143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:31.030 [2024-11-06 10:10:34.501150] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:31.030 [2024-11-06 10:10:34.501155] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:31.030 [2024-11-06 10:10:34.501158] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.030 [2024-11-06 10:10:34.501164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:31.030 [2024-11-06 10:10:34.501172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:31.030 [2024-11-06 10:10:34.501177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:31.030 [2024-11-06 10:10:34.501180] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:31.030 [2024-11-06 10:10:34.501186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:31.030 [2024-11-06 10:10:34.501193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:31.030 [2024-11-06 10:10:34.501206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:31.030 [2024-11-06 10:10:34.501218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:31.030 [2024-11-06 10:10:34.501225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:31.030 ===================================================== 00:18:31.030 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.030 ===================================================== 00:18:31.030 Controller Capabilities/Features 00:18:31.030 ================================ 00:18:31.030 Vendor ID: 4e58 00:18:31.030 Subsystem Vendor ID: 4e58 00:18:31.030 Serial Number: SPDK1 00:18:31.030 Model Number: SPDK bdev Controller 00:18:31.030 Firmware Version: 25.01 00:18:31.030 Recommended Arb Burst: 6 00:18:31.030 IEEE OUI Identifier: 8d 6b 50 00:18:31.030 Multi-path I/O 00:18:31.030 May have multiple subsystem ports: Yes 00:18:31.030 May have multiple controllers: Yes 00:18:31.030 Associated with SR-IOV VF: No 00:18:31.030 Max Data Transfer Size: 131072 00:18:31.030 Max Number of Namespaces: 32 00:18:31.030 Max Number of I/O Queues: 127 00:18:31.030 NVMe Specification Version (VS): 1.3 00:18:31.030 NVMe Specification Version (Identify): 1.3 00:18:31.030 Maximum Queue Entries: 256 00:18:31.030 Contiguous Queues Required: Yes 00:18:31.030 Arbitration Mechanisms Supported 00:18:31.030 Weighted Round Robin: Not Supported 00:18:31.030 Vendor Specific: Not Supported 00:18:31.030 Reset Timeout: 15000 ms 00:18:31.030 Doorbell Stride: 4 bytes 00:18:31.030 NVM Subsystem Reset: Not Supported 00:18:31.030 Command Sets Supported 00:18:31.030 NVM Command Set: Supported 00:18:31.030 Boot Partition: Not Supported 00:18:31.030 Memory Page Size Minimum: 4096 bytes 00:18:31.030 Memory Page Size Maximum: 4096 bytes 00:18:31.030 Persistent Memory Region: Not Supported 00:18:31.030 Optional Asynchronous Events Supported 00:18:31.030 Namespace Attribute Notices: Supported 00:18:31.030 Firmware Activation Notices: Not Supported 00:18:31.030 ANA Change Notices: Not Supported 00:18:31.030 PLE Aggregate Log Change Notices: Not Supported 00:18:31.030 LBA Status Info Alert Notices: Not Supported 00:18:31.030 EGE Aggregate Log Change Notices: Not Supported 00:18:31.030 Normal NVM Subsystem Shutdown event: Not Supported 00:18:31.030 Zone Descriptor Change Notices: Not Supported 00:18:31.030 Discovery Log Change Notices: Not Supported 00:18:31.030 Controller Attributes 00:18:31.030 128-bit Host Identifier: Supported 00:18:31.030 Non-Operational Permissive Mode: Not Supported 00:18:31.030 NVM Sets: Not Supported 00:18:31.030 Read Recovery Levels: Not Supported 00:18:31.030 Endurance Groups: Not Supported 00:18:31.030 Predictable Latency Mode: Not Supported 00:18:31.030 Traffic Based Keep ALive: Not Supported 00:18:31.030 Namespace Granularity: Not Supported 00:18:31.030 SQ Associations: Not Supported 00:18:31.030 UUID List: Not Supported 00:18:31.030 Multi-Domain Subsystem: Not Supported 00:18:31.030 Fixed Capacity Management: Not Supported 00:18:31.030 Variable Capacity Management: Not Supported 00:18:31.030 Delete Endurance Group: Not Supported 00:18:31.030 Delete NVM Set: Not Supported 00:18:31.030 Extended LBA Formats Supported: Not Supported 00:18:31.030 Flexible Data Placement Supported: Not Supported 00:18:31.030 00:18:31.030 Controller Memory Buffer Support 00:18:31.030 ================================ 00:18:31.030 
Supported: No 00:18:31.030 00:18:31.030 Persistent Memory Region Support 00:18:31.030 ================================ 00:18:31.030 Supported: No 00:18:31.030 00:18:31.030 Admin Command Set Attributes 00:18:31.030 ============================ 00:18:31.030 Security Send/Receive: Not Supported 00:18:31.030 Format NVM: Not Supported 00:18:31.030 Firmware Activate/Download: Not Supported 00:18:31.030 Namespace Management: Not Supported 00:18:31.030 Device Self-Test: Not Supported 00:18:31.030 Directives: Not Supported 00:18:31.030 NVMe-MI: Not Supported 00:18:31.030 Virtualization Management: Not Supported 00:18:31.030 Doorbell Buffer Config: Not Supported 00:18:31.030 Get LBA Status Capability: Not Supported 00:18:31.030 Command & Feature Lockdown Capability: Not Supported 00:18:31.030 Abort Command Limit: 4 00:18:31.030 Async Event Request Limit: 4 00:18:31.030 Number of Firmware Slots: N/A 00:18:31.030 Firmware Slot 1 Read-Only: N/A 00:18:31.030 Firmware Activation Without Reset: N/A 00:18:31.030 Multiple Update Detection Support: N/A 00:18:31.030 Firmware Update Granularity: No Information Provided 00:18:31.030 Per-Namespace SMART Log: No 00:18:31.030 Asymmetric Namespace Access Log Page: Not Supported 00:18:31.030 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:31.030 Command Effects Log Page: Supported 00:18:31.030 Get Log Page Extended Data: Supported 00:18:31.030 Telemetry Log Pages: Not Supported 00:18:31.030 Persistent Event Log Pages: Not Supported 00:18:31.030 Supported Log Pages Log Page: May Support 00:18:31.030 Commands Supported & Effects Log Page: Not Supported 00:18:31.030 Feature Identifiers & Effects Log Page:May Support 00:18:31.030 NVMe-MI Commands & Effects Log Page: May Support 00:18:31.030 Data Area 4 for Telemetry Log: Not Supported 00:18:31.030 Error Log Page Entries Supported: 128 00:18:31.030 Keep Alive: Supported 00:18:31.030 Keep Alive Granularity: 10000 ms 00:18:31.030 00:18:31.030 NVM Command Set Attributes 00:18:31.030 ========================== 00:18:31.030 Submission Queue Entry Size 00:18:31.030 Max: 64 00:18:31.030 Min: 64 00:18:31.030 Completion Queue Entry Size 00:18:31.030 Max: 16 00:18:31.030 Min: 16 00:18:31.030 Number of Namespaces: 32 00:18:31.030 Compare Command: Supported 00:18:31.030 Write Uncorrectable Command: Not Supported 00:18:31.030 Dataset Management Command: Supported 00:18:31.030 Write Zeroes Command: Supported 00:18:31.030 Set Features Save Field: Not Supported 00:18:31.030 Reservations: Not Supported 00:18:31.030 Timestamp: Not Supported 00:18:31.030 Copy: Supported 00:18:31.030 Volatile Write Cache: Present 00:18:31.030 Atomic Write Unit (Normal): 1 00:18:31.030 Atomic Write Unit (PFail): 1 00:18:31.030 Atomic Compare & Write Unit: 1 00:18:31.030 Fused Compare & Write: Supported 00:18:31.030 Scatter-Gather List 00:18:31.030 SGL Command Set: Supported (Dword aligned) 00:18:31.030 SGL Keyed: Not Supported 00:18:31.030 SGL Bit Bucket Descriptor: Not Supported 00:18:31.030 SGL Metadata Pointer: Not Supported 00:18:31.030 Oversized SGL: Not Supported 00:18:31.030 SGL Metadata Address: Not Supported 00:18:31.030 SGL Offset: Not Supported 00:18:31.030 Transport SGL Data Block: Not Supported 00:18:31.030 Replay Protected Memory Block: Not Supported 00:18:31.030 00:18:31.030 Firmware Slot Information 00:18:31.030 ========================= 00:18:31.030 Active slot: 1 00:18:31.030 Slot 1 Firmware Revision: 25.01 00:18:31.030 00:18:31.030 00:18:31.030 Commands Supported and Effects 00:18:31.030 ============================== 00:18:31.030 Admin 
Commands 00:18:31.030 -------------- 00:18:31.031 Get Log Page (02h): Supported 00:18:31.031 Identify (06h): Supported 00:18:31.031 Abort (08h): Supported 00:18:31.031 Set Features (09h): Supported 00:18:31.031 Get Features (0Ah): Supported 00:18:31.031 Asynchronous Event Request (0Ch): Supported 00:18:31.031 Keep Alive (18h): Supported 00:18:31.031 I/O Commands 00:18:31.031 ------------ 00:18:31.031 Flush (00h): Supported LBA-Change 00:18:31.031 Write (01h): Supported LBA-Change 00:18:31.031 Read (02h): Supported 00:18:31.031 Compare (05h): Supported 00:18:31.031 Write Zeroes (08h): Supported LBA-Change 00:18:31.031 Dataset Management (09h): Supported LBA-Change 00:18:31.031 Copy (19h): Supported LBA-Change 00:18:31.031 00:18:31.031 Error Log 00:18:31.031 ========= 00:18:31.031 00:18:31.031 Arbitration 00:18:31.031 =========== 00:18:31.031 Arbitration Burst: 1 00:18:31.031 00:18:31.031 Power Management 00:18:31.031 ================ 00:18:31.031 Number of Power States: 1 00:18:31.031 Current Power State: Power State #0 00:18:31.031 Power State #0: 00:18:31.031 Max Power: 0.00 W 00:18:31.031 Non-Operational State: Operational 00:18:31.031 Entry Latency: Not Reported 00:18:31.031 Exit Latency: Not Reported 00:18:31.031 Relative Read Throughput: 0 00:18:31.031 Relative Read Latency: 0 00:18:31.031 Relative Write Throughput: 0 00:18:31.031 Relative Write Latency: 0 00:18:31.031 Idle Power: Not Reported 00:18:31.031 Active Power: Not Reported 00:18:31.031 Non-Operational Permissive Mode: Not Supported 00:18:31.031 00:18:31.031 Health Information 00:18:31.031 ================== 00:18:31.031 Critical Warnings: 00:18:31.031 Available Spare Space: OK 00:18:31.031 Temperature: OK 00:18:31.031 Device Reliability: OK 00:18:31.031 Read Only: No 00:18:31.031 Volatile Memory Backup: OK 00:18:31.031 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:31.031 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:31.031 Available Spare: 0% 00:18:31.031 Available Sp[2024-11-06 10:10:34.501327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:31.031 [2024-11-06 10:10:34.501336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:31.031 [2024-11-06 10:10:34.501365] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:31.031 [2024-11-06 10:10:34.501375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.031 [2024-11-06 10:10:34.501382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.031 [2024-11-06 10:10:34.501389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.031 [2024-11-06 10:10:34.501395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.031 [2024-11-06 10:10:34.503869] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:31.031 [2024-11-06 10:10:34.503883] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:31.031 [2024-11-06 10:10:34.504422] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.031 [2024-11-06 10:10:34.504465] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:31.031 [2024-11-06 10:10:34.504472] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:31.031 [2024-11-06 10:10:34.505425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:31.031 [2024-11-06 10:10:34.505436] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:31.031 [2024-11-06 10:10:34.505494] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:31.031 [2024-11-06 10:10:34.508869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:31.291 are Threshold: 0% 00:18:31.291 Life Percentage Used: 0% 00:18:31.291 Data Units Read: 0 00:18:31.291 Data Units Written: 0 00:18:31.291 Host Read Commands: 0 00:18:31.291 Host Write Commands: 0 00:18:31.291 Controller Busy Time: 0 minutes 00:18:31.291 Power Cycles: 0 00:18:31.291 Power On Hours: 0 hours 00:18:31.291 Unsafe Shutdowns: 0 00:18:31.291 Unrecoverable Media Errors: 0 00:18:31.291 Lifetime Error Log Entries: 0 00:18:31.291 Warning Temperature Time: 0 minutes 00:18:31.291 Critical Temperature Time: 0 minutes 00:18:31.291 00:18:31.291 Number of Queues 00:18:31.291 ================ 00:18:31.291 Number of I/O Submission Queues: 127 00:18:31.291 Number of I/O Completion Queues: 127 00:18:31.291 00:18:31.291 Active Namespaces 00:18:31.291 ================= 00:18:31.291 Namespace ID:1 00:18:31.291 Error Recovery Timeout: Unlimited 00:18:31.291 Command Set Identifier: NVM (00h) 00:18:31.292 Deallocate: Supported 00:18:31.292 Deallocated/Unwritten Error: Not Supported 00:18:31.292 Deallocated Read Value: Unknown 00:18:31.292 Deallocate in Write Zeroes: Not Supported 00:18:31.292 Deallocated Guard Field: 0xFFFF 00:18:31.292 Flush: Supported 00:18:31.292 Reservation: Supported 00:18:31.292 Namespace Sharing Capabilities: Multiple Controllers 00:18:31.292 Size (in LBAs): 131072 (0GiB) 00:18:31.292 Capacity (in LBAs): 131072 (0GiB) 00:18:31.292 Utilization (in LBAs): 131072 (0GiB) 00:18:31.292 NGUID: 7DB53B4E34B84DDBB7050DE04FCC72E0 00:18:31.292 UUID: 7db53b4e-34b8-4ddb-b705-0de04fcc72e0 00:18:31.292 Thin Provisioning: Not Supported 00:18:31.292 Per-NS Atomic Units: Yes 00:18:31.292 Atomic Boundary Size (Normal): 0 00:18:31.292 Atomic Boundary Size (PFail): 0 00:18:31.292 Atomic Boundary Offset: 0 00:18:31.292 Maximum Single Source Range Length: 65535 00:18:31.292 Maximum Copy Length: 65535 00:18:31.292 Maximum Source Range Count: 1 00:18:31.292 NGUID/EUI64 Never Reused: No 00:18:31.292 Namespace Write Protected: No 00:18:31.292 Number of LBA Formats: 1 00:18:31.292 Current LBA Format: LBA Format #00 00:18:31.292 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:31.292 00:18:31.292 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:18:31.292 [2024-11-06 10:10:34.704541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.579 Initializing NVMe Controllers 00:18:36.579 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:36.579 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:36.579 Initialization complete. Launching workers. 00:18:36.579 ======================================================== 00:18:36.579 Latency(us) 00:18:36.579 Device Information : IOPS MiB/s Average min max 00:18:36.579 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40036.01 156.39 3197.32 850.88 6922.94 00:18:36.579 ======================================================== 00:18:36.579 Total : 40036.01 156.39 3197.32 850.88 6922.94 00:18:36.579 00:18:36.579 [2024-11-06 10:10:39.725490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.579 10:10:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:36.580 [2024-11-06 10:10:39.916363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:41.865 Initializing NVMe Controllers 00:18:41.865 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:41.865 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:41.865 Initialization complete. Launching workers. 
00:18:41.865 ======================================================== 00:18:41.865 Latency(us) 00:18:41.865 Device Information : IOPS MiB/s Average min max 00:18:41.865 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16033.53 62.63 7988.82 5989.79 15963.71 00:18:41.865 ======================================================== 00:18:41.865 Total : 16033.53 62.63 7988.82 5989.79 15963.71 00:18:41.865 00:18:41.865 [2024-11-06 10:10:44.956317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:41.865 10:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:41.865 [2024-11-06 10:10:45.172246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.151 [2024-11-06 10:10:50.285227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.151 Initializing NVMe Controllers 00:18:47.151 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:47.151 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:47.151 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:47.151 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:47.151 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:47.151 Initialization complete. Launching workers. 00:18:47.151 Starting thread on core 2 00:18:47.151 Starting thread on core 3 00:18:47.151 Starting thread on core 1 00:18:47.151 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:47.151 [2024-11-06 10:10:50.573269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.447 [2024-11-06 10:10:53.643771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.447 Initializing NVMe Controllers 00:18:50.447 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.447 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.447 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:50.447 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:50.447 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:50.447 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:50.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:50.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:50.447 Initialization complete. Launching workers. 
00:18:50.447 Starting thread on core 1 with urgent priority queue 00:18:50.447 Starting thread on core 2 with urgent priority queue 00:18:50.447 Starting thread on core 3 with urgent priority queue 00:18:50.447 Starting thread on core 0 with urgent priority queue 00:18:50.447 SPDK bdev Controller (SPDK1 ) core 0: 8306.67 IO/s 12.04 secs/100000 ios 00:18:50.447 SPDK bdev Controller (SPDK1 ) core 1: 14642.00 IO/s 6.83 secs/100000 ios 00:18:50.447 SPDK bdev Controller (SPDK1 ) core 2: 9585.00 IO/s 10.43 secs/100000 ios 00:18:50.447 SPDK bdev Controller (SPDK1 ) core 3: 15618.67 IO/s 6.40 secs/100000 ios 00:18:50.447 ======================================================== 00:18:50.447 00:18:50.447 10:10:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:50.447 [2024-11-06 10:10:53.942346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.707 Initializing NVMe Controllers 00:18:50.707 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.707 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.707 Namespace ID: 1 size: 0GB 00:18:50.707 Initialization complete. 00:18:50.707 INFO: using host memory buffer for IO 00:18:50.707 Hello world! 00:18:50.707 [2024-11-06 10:10:53.975528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.707 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:50.967 [2024-11-06 10:10:54.269222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:51.907 Initializing NVMe Controllers 00:18:51.907 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:51.907 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:51.907 Initialization complete. Launching workers. 
00:18:51.907 submit (in ns) avg, min, max = 9520.7, 3896.7, 4001251.7 00:18:51.907 complete (in ns) avg, min, max = 16988.8, 2376.7, 4001045.8 00:18:51.907 00:18:51.907 Submit histogram 00:18:51.907 ================ 00:18:51.907 Range in us Cumulative Count 00:18:51.907 3.893 - 3.920: 0.9458% ( 180) 00:18:51.907 3.920 - 3.947: 5.1337% ( 797) 00:18:51.907 3.947 - 3.973: 13.3887% ( 1571) 00:18:51.907 3.973 - 4.000: 24.2394% ( 2065) 00:18:51.907 4.000 - 4.027: 37.1919% ( 2465) 00:18:51.907 4.027 - 4.053: 51.2953% ( 2684) 00:18:51.907 4.053 - 4.080: 68.1152% ( 3201) 00:18:51.907 4.080 - 4.107: 83.2799% ( 2886) 00:18:51.907 4.107 - 4.133: 91.8134% ( 1624) 00:18:51.907 4.133 - 4.160: 96.2114% ( 837) 00:18:51.907 4.160 - 4.187: 98.3763% ( 412) 00:18:51.907 4.187 - 4.213: 99.1120% ( 140) 00:18:51.907 4.213 - 4.240: 99.3169% ( 39) 00:18:51.907 4.240 - 4.267: 99.3905% ( 14) 00:18:51.907 4.267 - 4.293: 99.4220% ( 6) 00:18:51.907 4.293 - 4.320: 99.4273% ( 1) 00:18:51.907 4.400 - 4.427: 99.4325% ( 1) 00:18:51.907 4.507 - 4.533: 99.4378% ( 1) 00:18:51.907 4.640 - 4.667: 99.4483% ( 2) 00:18:51.907 4.720 - 4.747: 99.4535% ( 1) 00:18:51.907 4.800 - 4.827: 99.4588% ( 1) 00:18:51.907 4.960 - 4.987: 99.4640% ( 1) 00:18:51.907 5.040 - 5.067: 99.4693% ( 1) 00:18:51.907 5.173 - 5.200: 99.4745% ( 1) 00:18:51.907 5.333 - 5.360: 99.4798% ( 1) 00:18:51.907 5.547 - 5.573: 99.4851% ( 1) 00:18:51.907 5.573 - 5.600: 99.4903% ( 1) 00:18:51.907 5.653 - 5.680: 99.4956% ( 1) 00:18:51.907 5.680 - 5.707: 99.5008% ( 1) 00:18:51.907 5.733 - 5.760: 99.5113% ( 2) 00:18:51.907 5.893 - 5.920: 99.5166% ( 1) 00:18:51.907 6.000 - 6.027: 99.5218% ( 1) 00:18:51.907 6.027 - 6.053: 99.5271% ( 1) 00:18:51.907 6.107 - 6.133: 99.5323% ( 1) 00:18:51.907 6.133 - 6.160: 99.5376% ( 1) 00:18:51.907 6.747 - 6.773: 99.5429% ( 1) 00:18:51.907 6.933 - 6.987: 99.5481% ( 1) 00:18:51.907 6.987 - 7.040: 99.5534% ( 1) 00:18:51.907 7.040 - 7.093: 99.5691% ( 3) 00:18:51.907 7.093 - 7.147: 99.5796% ( 2) 00:18:51.907 7.147 - 7.200: 99.5954% ( 3) 00:18:51.907 7.200 - 7.253: 99.6007% ( 1) 00:18:51.907 7.307 - 7.360: 99.6112% ( 2) 00:18:51.907 7.360 - 7.413: 99.6217% ( 2) 00:18:51.907 7.413 - 7.467: 99.6479% ( 5) 00:18:51.907 7.467 - 7.520: 99.6585% ( 2) 00:18:51.907 7.520 - 7.573: 99.6690% ( 2) 00:18:51.907 7.733 - 7.787: 99.6742% ( 1) 00:18:51.907 7.787 - 7.840: 99.6847% ( 2) 00:18:51.907 7.840 - 7.893: 99.7005% ( 3) 00:18:51.907 7.893 - 7.947: 99.7057% ( 1) 00:18:51.907 7.947 - 8.000: 99.7163% ( 2) 00:18:51.907 8.000 - 8.053: 99.7215% ( 1) 00:18:51.907 8.053 - 8.107: 99.7268% ( 1) 00:18:51.907 8.107 - 8.160: 99.7373% ( 2) 00:18:51.907 8.160 - 8.213: 99.7425% ( 1) 00:18:51.907 8.213 - 8.267: 99.7478% ( 1) 00:18:51.907 8.320 - 8.373: 99.7635% ( 3) 00:18:51.907 8.373 - 8.427: 99.7688% ( 1) 00:18:51.907 8.427 - 8.480: 99.7793% ( 2) 00:18:51.907 8.480 - 8.533: 99.7846% ( 1) 00:18:51.907 8.533 - 8.587: 99.7898% ( 1) 00:18:51.907 8.587 - 8.640: 99.8003% ( 2) 00:18:51.907 8.640 - 8.693: 99.8108% ( 2) 00:18:51.907 8.693 - 8.747: 99.8213% ( 2) 00:18:51.907 9.013 - 9.067: 99.8319% ( 2) 00:18:51.907 9.440 - 9.493: 99.8371% ( 1) 00:18:51.907 9.920 - 9.973: 99.8424% ( 1) 00:18:51.907 10.080 - 10.133: 99.8476% ( 1) 00:18:51.907 10.133 - 10.187: 99.8529% ( 1) 00:18:51.907 11.200 - 11.253: 99.8581% ( 1) 00:18:51.907 13.280 - 13.333: 99.8634% ( 1) 00:18:51.907 3986.773 - 4014.080: 100.0000% ( 26) 00:18:51.907 00:18:51.907 Complete histogram 00:18:51.907 ================== 00:18:51.907 Range in us Cumulative Count 00:18:51.907 2.373 - 2.387: 0.0105% ( 2) 00:18:51.907 2.387 - 
2.400: 0.0841% ( 14) 00:18:51.907 2.400 - 2.413: 0.9616% ( 167) 00:18:51.907 2.413 - 2.427: 1.0614% ( 19) 00:18:51.907 2.427 - [2024-11-06 10:10:55.292765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:51.907 2.440: 1.2243% ( 31) 00:18:51.907 2.440 - 2.453: 1.2506% ( 5) 00:18:51.907 2.453 - 2.467: 8.8382% ( 1444) 00:18:51.907 2.467 - 2.480: 53.5127% ( 8502) 00:18:51.907 2.480 - 2.493: 62.1302% ( 1640) 00:18:51.907 2.493 - 2.507: 73.9268% ( 2245) 00:18:51.907 2.507 - 2.520: 79.2339% ( 1010) 00:18:51.907 2.520 - 2.533: 81.6510% ( 460) 00:18:51.907 2.533 - 2.547: 86.8951% ( 998) 00:18:51.907 2.547 - 2.560: 93.0587% ( 1173) 00:18:51.907 2.560 - 2.573: 96.1326% ( 585) 00:18:51.907 2.573 - 2.587: 97.8088% ( 319) 00:18:51.907 2.587 - 2.600: 98.8335% ( 195) 00:18:51.907 2.600 - 2.613: 99.2906% ( 87) 00:18:51.907 2.613 - 2.627: 99.4010% ( 21) 00:18:51.907 2.627 - 2.640: 99.4167% ( 3) 00:18:51.907 5.120 - 5.147: 99.4220% ( 1) 00:18:51.907 5.173 - 5.200: 99.4273% ( 1) 00:18:51.907 5.200 - 5.227: 99.4325% ( 1) 00:18:51.907 5.307 - 5.333: 99.4378% ( 1) 00:18:51.907 5.360 - 5.387: 99.4430% ( 1) 00:18:51.907 5.413 - 5.440: 99.4483% ( 1) 00:18:51.907 5.467 - 5.493: 99.4535% ( 1) 00:18:51.907 5.600 - 5.627: 99.4588% ( 1) 00:18:51.907 5.653 - 5.680: 99.4640% ( 1) 00:18:51.907 5.733 - 5.760: 99.4693% ( 1) 00:18:51.907 5.760 - 5.787: 99.4745% ( 1) 00:18:51.907 5.813 - 5.840: 99.4798% ( 1) 00:18:51.907 5.867 - 5.893: 99.4851% ( 1) 00:18:51.907 5.947 - 5.973: 99.4903% ( 1) 00:18:51.907 6.053 - 6.080: 99.5008% ( 2) 00:18:51.907 6.133 - 6.160: 99.5113% ( 2) 00:18:51.907 6.187 - 6.213: 99.5166% ( 1) 00:18:51.907 6.213 - 6.240: 99.5218% ( 1) 00:18:51.907 6.240 - 6.267: 99.5271% ( 1) 00:18:51.907 6.347 - 6.373: 99.5323% ( 1) 00:18:51.907 6.373 - 6.400: 99.5376% ( 1) 00:18:51.907 6.400 - 6.427: 99.5429% ( 1) 00:18:51.907 6.480 - 6.507: 99.5481% ( 1) 00:18:51.907 6.507 - 6.533: 99.5586% ( 2) 00:18:51.907 6.587 - 6.613: 99.5639% ( 1) 00:18:51.907 6.640 - 6.667: 99.5691% ( 1) 00:18:51.907 6.667 - 6.693: 99.5744% ( 1) 00:18:51.907 6.773 - 6.800: 99.5796% ( 1) 00:18:51.907 6.800 - 6.827: 99.5849% ( 1) 00:18:51.907 6.933 - 6.987: 99.5901% ( 1) 00:18:51.907 7.040 - 7.093: 99.5954% ( 1) 00:18:51.907 7.147 - 7.200: 99.6007% ( 1) 00:18:51.907 7.467 - 7.520: 99.6112% ( 2) 00:18:51.907 7.520 - 7.573: 99.6164% ( 1) 00:18:51.907 7.680 - 7.733: 99.6217% ( 1) 00:18:51.907 11.040 - 11.093: 99.6269% ( 1) 00:18:51.907 12.160 - 12.213: 99.6322% ( 1) 00:18:51.907 13.013 - 13.067: 99.6374% ( 1) 00:18:51.907 3986.773 - 4014.080: 100.0000% ( 69) 00:18:51.907 00:18:51.907 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:51.907 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:51.907 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:51.907 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:51.907 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:52.173 [ 00:18:52.173 { 00:18:52.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:52.173 "subtype": "Discovery", 00:18:52.173 
"listen_addresses": [], 00:18:52.173 "allow_any_host": true, 00:18:52.173 "hosts": [] 00:18:52.173 }, 00:18:52.173 { 00:18:52.173 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:52.173 "subtype": "NVMe", 00:18:52.173 "listen_addresses": [ 00:18:52.173 { 00:18:52.173 "trtype": "VFIOUSER", 00:18:52.173 "adrfam": "IPv4", 00:18:52.173 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:52.173 "trsvcid": "0" 00:18:52.173 } 00:18:52.173 ], 00:18:52.173 "allow_any_host": true, 00:18:52.173 "hosts": [], 00:18:52.173 "serial_number": "SPDK1", 00:18:52.173 "model_number": "SPDK bdev Controller", 00:18:52.173 "max_namespaces": 32, 00:18:52.173 "min_cntlid": 1, 00:18:52.173 "max_cntlid": 65519, 00:18:52.173 "namespaces": [ 00:18:52.173 { 00:18:52.174 "nsid": 1, 00:18:52.174 "bdev_name": "Malloc1", 00:18:52.174 "name": "Malloc1", 00:18:52.174 "nguid": "7DB53B4E34B84DDBB7050DE04FCC72E0", 00:18:52.174 "uuid": "7db53b4e-34b8-4ddb-b705-0de04fcc72e0" 00:18:52.174 } 00:18:52.174 ] 00:18:52.174 }, 00:18:52.174 { 00:18:52.174 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:52.174 "subtype": "NVMe", 00:18:52.174 "listen_addresses": [ 00:18:52.174 { 00:18:52.174 "trtype": "VFIOUSER", 00:18:52.174 "adrfam": "IPv4", 00:18:52.174 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:52.174 "trsvcid": "0" 00:18:52.174 } 00:18:52.174 ], 00:18:52.174 "allow_any_host": true, 00:18:52.174 "hosts": [], 00:18:52.174 "serial_number": "SPDK2", 00:18:52.174 "model_number": "SPDK bdev Controller", 00:18:52.174 "max_namespaces": 32, 00:18:52.174 "min_cntlid": 1, 00:18:52.174 "max_cntlid": 65519, 00:18:52.174 "namespaces": [ 00:18:52.174 { 00:18:52.174 "nsid": 1, 00:18:52.174 "bdev_name": "Malloc2", 00:18:52.174 "name": "Malloc2", 00:18:52.174 "nguid": "4BB07E1DCCA7478A8F78548C2E2C040A", 00:18:52.174 "uuid": "4bb07e1d-cca7-478a-8f78-548c2e2c040a" 00:18:52.174 } 00:18:52.174 ] 00:18:52.174 } 00:18:52.174 ] 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3846875 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:52.174 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:52.437 Malloc3 00:18:52.437 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:52.437 [2024-11-06 10:10:55.744489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:52.437 [2024-11-06 10:10:55.888455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:52.437 10:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:52.437 Asynchronous Event Request test 00:18:52.437 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:52.437 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:52.437 Registering asynchronous event callbacks... 00:18:52.437 Starting namespace attribute notice tests for all controllers... 00:18:52.437 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:52.437 aer_cb - Changed Namespace 00:18:52.437 Cleaning up... 00:18:52.697 [ 00:18:52.697 { 00:18:52.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:52.698 "subtype": "Discovery", 00:18:52.698 "listen_addresses": [], 00:18:52.698 "allow_any_host": true, 00:18:52.698 "hosts": [] 00:18:52.698 }, 00:18:52.698 { 00:18:52.698 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:52.698 "subtype": "NVMe", 00:18:52.698 "listen_addresses": [ 00:18:52.698 { 00:18:52.698 "trtype": "VFIOUSER", 00:18:52.698 "adrfam": "IPv4", 00:18:52.698 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:52.698 "trsvcid": "0" 00:18:52.698 } 00:18:52.698 ], 00:18:52.698 "allow_any_host": true, 00:18:52.698 "hosts": [], 00:18:52.698 "serial_number": "SPDK1", 00:18:52.698 "model_number": "SPDK bdev Controller", 00:18:52.698 "max_namespaces": 32, 00:18:52.698 "min_cntlid": 1, 00:18:52.698 "max_cntlid": 65519, 00:18:52.698 "namespaces": [ 00:18:52.698 { 00:18:52.698 "nsid": 1, 00:18:52.698 "bdev_name": "Malloc1", 00:18:52.698 "name": "Malloc1", 00:18:52.698 "nguid": "7DB53B4E34B84DDBB7050DE04FCC72E0", 00:18:52.698 "uuid": "7db53b4e-34b8-4ddb-b705-0de04fcc72e0" 00:18:52.698 }, 00:18:52.698 { 00:18:52.698 "nsid": 2, 00:18:52.698 "bdev_name": "Malloc3", 00:18:52.698 "name": "Malloc3", 00:18:52.698 "nguid": "B7D9FC121F27493D87F27C03E4C3A6BC", 00:18:52.698 "uuid": "b7d9fc12-1f27-493d-87f2-7c03e4c3a6bc" 00:18:52.698 } 00:18:52.698 ] 00:18:52.698 }, 00:18:52.698 { 00:18:52.698 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:52.698 "subtype": "NVMe", 00:18:52.698 "listen_addresses": [ 00:18:52.698 { 00:18:52.698 "trtype": "VFIOUSER", 00:18:52.698 "adrfam": "IPv4", 00:18:52.698 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:52.698 "trsvcid": "0" 00:18:52.698 } 00:18:52.698 ], 00:18:52.698 "allow_any_host": true, 00:18:52.698 "hosts": [], 00:18:52.698 "serial_number": "SPDK2", 00:18:52.698 "model_number": "SPDK bdev 
Controller", 00:18:52.698 "max_namespaces": 32, 00:18:52.698 "min_cntlid": 1, 00:18:52.698 "max_cntlid": 65519, 00:18:52.698 "namespaces": [ 00:18:52.698 { 00:18:52.698 "nsid": 1, 00:18:52.698 "bdev_name": "Malloc2", 00:18:52.698 "name": "Malloc2", 00:18:52.698 "nguid": "4BB07E1DCCA7478A8F78548C2E2C040A", 00:18:52.698 "uuid": "4bb07e1d-cca7-478a-8f78-548c2e2c040a" 00:18:52.698 } 00:18:52.698 ] 00:18:52.698 } 00:18:52.698 ] 00:18:52.698 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3846875 00:18:52.698 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:52.698 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:52.698 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:52.698 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:52.698 [2024-11-06 10:10:56.124333] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:52.698 [2024-11-06 10:10:56.124378] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847135 ] 00:18:52.698 [2024-11-06 10:10:56.176849] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:52.698 [2024-11-06 10:10:56.182087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:52.698 [2024-11-06 10:10:56.182112] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f382db63000 00:18:52.698 [2024-11-06 10:10:56.183084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.184096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.185101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.186107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.187110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.188114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.189125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:52.698 [2024-11-06 10:10:56.190134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:18:52.698 [2024-11-06 10:10:56.191144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:52.698 [2024-11-06 10:10:56.191155] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f382db58000 00:18:52.698 [2024-11-06 10:10:56.192481] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:52.960 [2024-11-06 10:10:56.212699] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:52.960 [2024-11-06 10:10:56.212724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:52.960 [2024-11-06 10:10:56.214794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:52.960 [2024-11-06 10:10:56.214839] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:52.960 [2024-11-06 10:10:56.214929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:52.960 [2024-11-06 10:10:56.214943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:52.960 [2024-11-06 10:10:56.214948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:52.960 [2024-11-06 10:10:56.215795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:52.960 [2024-11-06 10:10:56.215804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:52.960 [2024-11-06 10:10:56.215812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:52.960 [2024-11-06 10:10:56.216799] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:52.960 [2024-11-06 10:10:56.216810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:52.960 [2024-11-06 10:10:56.216818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.217804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:52.960 [2024-11-06 10:10:56.217814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.218806] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:52.960 [2024-11-06 10:10:56.218816] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:18:52.960 [2024-11-06 10:10:56.218821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.218829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.218937] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:52.960 [2024-11-06 10:10:56.218942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.218947] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:52.960 [2024-11-06 10:10:56.219809] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:52.960 [2024-11-06 10:10:56.220815] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:52.960 [2024-11-06 10:10:56.221822] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:52.960 [2024-11-06 10:10:56.222821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.960 [2024-11-06 10:10:56.222865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:52.960 [2024-11-06 10:10:56.223830] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:52.960 [2024-11-06 10:10:56.223839] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:52.960 [2024-11-06 10:10:56.223844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:52.960 [2024-11-06 10:10:56.223873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:52.960 [2024-11-06 10:10:56.223881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:52.960 [2024-11-06 10:10:56.223893] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:52.960 [2024-11-06 10:10:56.223898] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:52.960 [2024-11-06 10:10:56.223902] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.960 [2024-11-06 10:10:56.223917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:52.960 [2024-11-06 10:10:56.231873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:52.960 
[2024-11-06 10:10:56.231885] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:52.960 [2024-11-06 10:10:56.231890] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:52.960 [2024-11-06 10:10:56.231895] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:52.960 [2024-11-06 10:10:56.231900] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:52.961 [2024-11-06 10:10:56.231907] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:52.961 [2024-11-06 10:10:56.231912] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:52.961 [2024-11-06 10:10:56.231917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.231926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.231937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.239867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.239880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.961 [2024-11-06 10:10:56.239889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.961 [2024-11-06 10:10:56.239897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.961 [2024-11-06 10:10:56.239906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.961 [2024-11-06 10:10:56.239910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.239917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.239927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.247868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.247878] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:52.961 [2024-11-06 10:10:56.247884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:18:52.961 [2024-11-06 10:10:56.247891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.247897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.247905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.255868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.255933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.255941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.255949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:52.961 [2024-11-06 10:10:56.255954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:52.961 [2024-11-06 10:10:56.255957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.961 [2024-11-06 10:10:56.255964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.263867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.263878] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:52.961 [2024-11-06 10:10:56.263889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.263897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.263904] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:52.961 [2024-11-06 10:10:56.263909] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:52.961 [2024-11-06 10:10:56.263913] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.961 [2024-11-06 10:10:56.263919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.271869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.271883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.271891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.271899] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:52.961 [2024-11-06 10:10:56.271903] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:52.961 [2024-11-06 10:10:56.271906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.961 [2024-11-06 10:10:56.271912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.279868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.279878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279917] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:52.961 [2024-11-06 10:10:56.279921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:52.961 [2024-11-06 10:10:56.279927] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:52.961 [2024-11-06 10:10:56.279944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.287869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.287883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.295867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.295880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.303868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:18:52.961 [2024-11-06 10:10:56.303882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.311866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:52.961 [2024-11-06 10:10:56.311882] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:52.961 [2024-11-06 10:10:56.311887] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:52.961 [2024-11-06 10:10:56.311891] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:52.961 [2024-11-06 10:10:56.311895] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:52.961 [2024-11-06 10:10:56.311898] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:52.961 [2024-11-06 10:10:56.311905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:52.961 [2024-11-06 10:10:56.311913] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:52.961 [2024-11-06 10:10:56.311917] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:52.961 [2024-11-06 10:10:56.311920] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.961 [2024-11-06 10:10:56.311926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:52.961 [2024-11-06 10:10:56.311934] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:52.961 [2024-11-06 10:10:56.311938] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:52.961 [2024-11-06 10:10:56.311942] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.962 [2024-11-06 10:10:56.311949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:52.962 [2024-11-06 10:10:56.311958] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:52.962 [2024-11-06 10:10:56.311962] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:52.962 [2024-11-06 10:10:56.311965] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:52.962 [2024-11-06 10:10:56.311971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:52.962 [2024-11-06 10:10:56.319869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:52.962 [2024-11-06 10:10:56.319884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:52.962 [2024-11-06 10:10:56.319895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:52.962 
[2024-11-06 10:10:56.319902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:52.962 ===================================================== 00:18:52.962 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:52.962 ===================================================== 00:18:52.962 Controller Capabilities/Features 00:18:52.962 ================================ 00:18:52.962 Vendor ID: 4e58 00:18:52.962 Subsystem Vendor ID: 4e58 00:18:52.962 Serial Number: SPDK2 00:18:52.962 Model Number: SPDK bdev Controller 00:18:52.962 Firmware Version: 25.01 00:18:52.962 Recommended Arb Burst: 6 00:18:52.962 IEEE OUI Identifier: 8d 6b 50 00:18:52.962 Multi-path I/O 00:18:52.962 May have multiple subsystem ports: Yes 00:18:52.962 May have multiple controllers: Yes 00:18:52.962 Associated with SR-IOV VF: No 00:18:52.962 Max Data Transfer Size: 131072 00:18:52.962 Max Number of Namespaces: 32 00:18:52.962 Max Number of I/O Queues: 127 00:18:52.962 NVMe Specification Version (VS): 1.3 00:18:52.962 NVMe Specification Version (Identify): 1.3 00:18:52.962 Maximum Queue Entries: 256 00:18:52.962 Contiguous Queues Required: Yes 00:18:52.962 Arbitration Mechanisms Supported 00:18:52.962 Weighted Round Robin: Not Supported 00:18:52.962 Vendor Specific: Not Supported 00:18:52.962 Reset Timeout: 15000 ms 00:18:52.962 Doorbell Stride: 4 bytes 00:18:52.962 NVM Subsystem Reset: Not Supported 00:18:52.962 Command Sets Supported 00:18:52.962 NVM Command Set: Supported 00:18:52.962 Boot Partition: Not Supported 00:18:52.962 Memory Page Size Minimum: 4096 bytes 00:18:52.962 Memory Page Size Maximum: 4096 bytes 00:18:52.962 Persistent Memory Region: Not Supported 00:18:52.962 Optional Asynchronous Events Supported 00:18:52.962 Namespace Attribute Notices: Supported 00:18:52.962 Firmware Activation Notices: Not Supported 00:18:52.962 ANA Change Notices: Not Supported 00:18:52.962 PLE Aggregate Log Change Notices: Not Supported 00:18:52.962 LBA Status Info Alert Notices: Not Supported 00:18:52.962 EGE Aggregate Log Change Notices: Not Supported 00:18:52.962 Normal NVM Subsystem Shutdown event: Not Supported 00:18:52.962 Zone Descriptor Change Notices: Not Supported 00:18:52.962 Discovery Log Change Notices: Not Supported 00:18:52.962 Controller Attributes 00:18:52.962 128-bit Host Identifier: Supported 00:18:52.962 Non-Operational Permissive Mode: Not Supported 00:18:52.962 NVM Sets: Not Supported 00:18:52.962 Read Recovery Levels: Not Supported 00:18:52.962 Endurance Groups: Not Supported 00:18:52.962 Predictable Latency Mode: Not Supported 00:18:52.962 Traffic Based Keep ALive: Not Supported 00:18:52.962 Namespace Granularity: Not Supported 00:18:52.962 SQ Associations: Not Supported 00:18:52.962 UUID List: Not Supported 00:18:52.962 Multi-Domain Subsystem: Not Supported 00:18:52.962 Fixed Capacity Management: Not Supported 00:18:52.962 Variable Capacity Management: Not Supported 00:18:52.962 Delete Endurance Group: Not Supported 00:18:52.962 Delete NVM Set: Not Supported 00:18:52.962 Extended LBA Formats Supported: Not Supported 00:18:52.962 Flexible Data Placement Supported: Not Supported 00:18:52.962 00:18:52.962 Controller Memory Buffer Support 00:18:52.962 ================================ 00:18:52.962 Supported: No 00:18:52.962 00:18:52.962 Persistent Memory Region Support 00:18:52.962 ================================ 00:18:52.962 Supported: No 00:18:52.962 00:18:52.962 Admin Command Set Attributes 
00:18:52.962 ============================ 00:18:52.962 Security Send/Receive: Not Supported 00:18:52.962 Format NVM: Not Supported 00:18:52.962 Firmware Activate/Download: Not Supported 00:18:52.962 Namespace Management: Not Supported 00:18:52.962 Device Self-Test: Not Supported 00:18:52.962 Directives: Not Supported 00:18:52.962 NVMe-MI: Not Supported 00:18:52.962 Virtualization Management: Not Supported 00:18:52.962 Doorbell Buffer Config: Not Supported 00:18:52.962 Get LBA Status Capability: Not Supported 00:18:52.962 Command & Feature Lockdown Capability: Not Supported 00:18:52.962 Abort Command Limit: 4 00:18:52.962 Async Event Request Limit: 4 00:18:52.962 Number of Firmware Slots: N/A 00:18:52.962 Firmware Slot 1 Read-Only: N/A 00:18:52.962 Firmware Activation Without Reset: N/A 00:18:52.962 Multiple Update Detection Support: N/A 00:18:52.962 Firmware Update Granularity: No Information Provided 00:18:52.962 Per-Namespace SMART Log: No 00:18:52.962 Asymmetric Namespace Access Log Page: Not Supported 00:18:52.962 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:52.962 Command Effects Log Page: Supported 00:18:52.962 Get Log Page Extended Data: Supported 00:18:52.962 Telemetry Log Pages: Not Supported 00:18:52.962 Persistent Event Log Pages: Not Supported 00:18:52.962 Supported Log Pages Log Page: May Support 00:18:52.962 Commands Supported & Effects Log Page: Not Supported 00:18:52.962 Feature Identifiers & Effects Log Page:May Support 00:18:52.962 NVMe-MI Commands & Effects Log Page: May Support 00:18:52.962 Data Area 4 for Telemetry Log: Not Supported 00:18:52.962 Error Log Page Entries Supported: 128 00:18:52.962 Keep Alive: Supported 00:18:52.962 Keep Alive Granularity: 10000 ms 00:18:52.962 00:18:52.962 NVM Command Set Attributes 00:18:52.962 ========================== 00:18:52.962 Submission Queue Entry Size 00:18:52.962 Max: 64 00:18:52.962 Min: 64 00:18:52.962 Completion Queue Entry Size 00:18:52.962 Max: 16 00:18:52.962 Min: 16 00:18:52.962 Number of Namespaces: 32 00:18:52.962 Compare Command: Supported 00:18:52.962 Write Uncorrectable Command: Not Supported 00:18:52.962 Dataset Management Command: Supported 00:18:52.962 Write Zeroes Command: Supported 00:18:52.962 Set Features Save Field: Not Supported 00:18:52.962 Reservations: Not Supported 00:18:52.962 Timestamp: Not Supported 00:18:52.962 Copy: Supported 00:18:52.962 Volatile Write Cache: Present 00:18:52.962 Atomic Write Unit (Normal): 1 00:18:52.962 Atomic Write Unit (PFail): 1 00:18:52.962 Atomic Compare & Write Unit: 1 00:18:52.962 Fused Compare & Write: Supported 00:18:52.962 Scatter-Gather List 00:18:52.962 SGL Command Set: Supported (Dword aligned) 00:18:52.962 SGL Keyed: Not Supported 00:18:52.962 SGL Bit Bucket Descriptor: Not Supported 00:18:52.962 SGL Metadata Pointer: Not Supported 00:18:52.962 Oversized SGL: Not Supported 00:18:52.962 SGL Metadata Address: Not Supported 00:18:52.962 SGL Offset: Not Supported 00:18:52.962 Transport SGL Data Block: Not Supported 00:18:52.962 Replay Protected Memory Block: Not Supported 00:18:52.962 00:18:52.962 Firmware Slot Information 00:18:52.962 ========================= 00:18:52.962 Active slot: 1 00:18:52.962 Slot 1 Firmware Revision: 25.01 00:18:52.962 00:18:52.962 00:18:52.962 Commands Supported and Effects 00:18:52.962 ============================== 00:18:52.962 Admin Commands 00:18:52.962 -------------- 00:18:52.962 Get Log Page (02h): Supported 00:18:52.963 Identify (06h): Supported 00:18:52.963 Abort (08h): Supported 00:18:52.963 Set Features (09h): Supported 
00:18:52.963 Get Features (0Ah): Supported 00:18:52.963 Asynchronous Event Request (0Ch): Supported 00:18:52.963 Keep Alive (18h): Supported 00:18:52.963 I/O Commands 00:18:52.963 ------------ 00:18:52.963 Flush (00h): Supported LBA-Change 00:18:52.963 Write (01h): Supported LBA-Change 00:18:52.963 Read (02h): Supported 00:18:52.963 Compare (05h): Supported 00:18:52.963 Write Zeroes (08h): Supported LBA-Change 00:18:52.963 Dataset Management (09h): Supported LBA-Change 00:18:52.963 Copy (19h): Supported LBA-Change 00:18:52.963 00:18:52.963 Error Log 00:18:52.963 ========= 00:18:52.963 00:18:52.963 Arbitration 00:18:52.963 =========== 00:18:52.963 Arbitration Burst: 1 00:18:52.963 00:18:52.963 Power Management 00:18:52.963 ================ 00:18:52.963 Number of Power States: 1 00:18:52.963 Current Power State: Power State #0 00:18:52.963 Power State #0: 00:18:52.963 Max Power: 0.00 W 00:18:52.963 Non-Operational State: Operational 00:18:52.963 Entry Latency: Not Reported 00:18:52.963 Exit Latency: Not Reported 00:18:52.963 Relative Read Throughput: 0 00:18:52.963 Relative Read Latency: 0 00:18:52.963 Relative Write Throughput: 0 00:18:52.963 Relative Write Latency: 0 00:18:52.963 Idle Power: Not Reported 00:18:52.963 Active Power: Not Reported 00:18:52.963 Non-Operational Permissive Mode: Not Supported 00:18:52.963 00:18:52.963 Health Information 00:18:52.963 ================== 00:18:52.963 Critical Warnings: 00:18:52.963 Available Spare Space: OK 00:18:52.963 Temperature: OK 00:18:52.963 Device Reliability: OK 00:18:52.963 Read Only: No 00:18:52.963 Volatile Memory Backup: OK 00:18:52.963 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:52.963 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:52.963 Available Spare: 0% 00:18:52.963 Available Sp[2024-11-06 10:10:56.320004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:52.963 [2024-11-06 10:10:56.327867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:52.963 [2024-11-06 10:10:56.327899] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:52.963 [2024-11-06 10:10:56.327908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.963 [2024-11-06 10:10:56.327915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.963 [2024-11-06 10:10:56.327921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.963 [2024-11-06 10:10:56.327928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.963 [2024-11-06 10:10:56.327978] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:52.963 [2024-11-06 10:10:56.327989] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:52.963 [2024-11-06 10:10:56.328978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.963 [2024-11-06 10:10:56.329028] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:52.963 [2024-11-06 10:10:56.329035] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:52.963 [2024-11-06 10:10:56.329982] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:52.963 [2024-11-06 10:10:56.329994] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:52.963 [2024-11-06 10:10:56.330042] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:52.963 [2024-11-06 10:10:56.331419] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:52.963 are Threshold: 0% 00:18:52.963 Life Percentage Used: 0% 00:18:52.963 Data Units Read: 0 00:18:52.963 Data Units Written: 0 00:18:52.963 Host Read Commands: 0 00:18:52.963 Host Write Commands: 0 00:18:52.963 Controller Busy Time: 0 minutes 00:18:52.963 Power Cycles: 0 00:18:52.963 Power On Hours: 0 hours 00:18:52.963 Unsafe Shutdowns: 0 00:18:52.963 Unrecoverable Media Errors: 0 00:18:52.963 Lifetime Error Log Entries: 0 00:18:52.963 Warning Temperature Time: 0 minutes 00:18:52.963 Critical Temperature Time: 0 minutes 00:18:52.963 00:18:52.963 Number of Queues 00:18:52.963 ================ 00:18:52.963 Number of I/O Submission Queues: 127 00:18:52.963 Number of I/O Completion Queues: 127 00:18:52.963 00:18:52.963 Active Namespaces 00:18:52.963 ================= 00:18:52.963 Namespace ID:1 00:18:52.963 Error Recovery Timeout: Unlimited 00:18:52.963 Command Set Identifier: NVM (00h) 00:18:52.963 Deallocate: Supported 00:18:52.963 Deallocated/Unwritten Error: Not Supported 00:18:52.963 Deallocated Read Value: Unknown 00:18:52.963 Deallocate in Write Zeroes: Not Supported 00:18:52.963 Deallocated Guard Field: 0xFFFF 00:18:52.963 Flush: Supported 00:18:52.963 Reservation: Supported 00:18:52.963 Namespace Sharing Capabilities: Multiple Controllers 00:18:52.963 Size (in LBAs): 131072 (0GiB) 00:18:52.963 Capacity (in LBAs): 131072 (0GiB) 00:18:52.963 Utilization (in LBAs): 131072 (0GiB) 00:18:52.963 NGUID: 4BB07E1DCCA7478A8F78548C2E2C040A 00:18:52.963 UUID: 4bb07e1d-cca7-478a-8f78-548c2e2c040a 00:18:52.963 Thin Provisioning: Not Supported 00:18:52.963 Per-NS Atomic Units: Yes 00:18:52.963 Atomic Boundary Size (Normal): 0 00:18:52.963 Atomic Boundary Size (PFail): 0 00:18:52.963 Atomic Boundary Offset: 0 00:18:52.963 Maximum Single Source Range Length: 65535 00:18:52.963 Maximum Copy Length: 65535 00:18:52.963 Maximum Source Range Count: 1 00:18:52.963 NGUID/EUI64 Never Reused: No 00:18:52.964 Namespace Write Protected: No 00:18:52.964 Number of LBA Formats: 1 00:18:52.964 Current LBA Format: LBA Format #00 00:18:52.964 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:52.964 00:18:52.964 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:53.223 [2024-11-06 10:10:56.534223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.505 Initializing NVMe Controllers 00:18:58.505 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:58.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:58.505 Initialization complete. Launching workers. 00:18:58.505 ======================================================== 00:18:58.505 Latency(us) 00:18:58.505 Device Information : IOPS MiB/s Average min max 00:18:58.506 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.36 156.18 3201.17 846.74 7233.72 00:18:58.506 ======================================================== 00:18:58.506 Total : 39981.36 156.18 3201.17 846.74 7233.72 00:18:58.506 00:18:58.506 [2024-11-06 10:11:01.640065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.506 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:58.506 [2024-11-06 10:11:01.831652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.793 Initializing NVMe Controllers 00:19:03.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:03.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:03.793 Initialization complete. Launching workers. 00:19:03.793 ======================================================== 00:19:03.793 Latency(us) 00:19:03.793 Device Information : IOPS MiB/s Average min max 00:19:03.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35156.25 137.33 3640.44 1104.46 8059.65 00:19:03.793 ======================================================== 00:19:03.793 Total : 35156.25 137.33 3640.44 1104.46 8059.65 00:19:03.793 00:19:03.793 [2024-11-06 10:11:06.851276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.794 10:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:03.794 [2024-11-06 10:11:07.059243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:09.090 [2024-11-06 10:11:12.195944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:09.090 Initializing NVMe Controllers 00:19:09.090 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:09.090 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:09.090 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:09.090 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:09.090 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:09.090 Initialization complete. Launching workers. 
00:19:09.090 Starting thread on core 2 00:19:09.090 Starting thread on core 3 00:19:09.090 Starting thread on core 1 00:19:09.090 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:09.090 [2024-11-06 10:11:12.489357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.388 [2024-11-06 10:11:15.557539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.388 Initializing NVMe Controllers 00:19:12.388 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.388 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.388 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:12.388 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:12.388 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:12.388 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:12.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:12.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:12.388 Initialization complete. Launching workers. 00:19:12.388 Starting thread on core 1 with urgent priority queue 00:19:12.388 Starting thread on core 2 with urgent priority queue 00:19:12.388 Starting thread on core 3 with urgent priority queue 00:19:12.388 Starting thread on core 0 with urgent priority queue 00:19:12.388 SPDK bdev Controller (SPDK2 ) core 0: 10686.00 IO/s 9.36 secs/100000 ios 00:19:12.388 SPDK bdev Controller (SPDK2 ) core 1: 11263.33 IO/s 8.88 secs/100000 ios 00:19:12.388 SPDK bdev Controller (SPDK2 ) core 2: 13689.00 IO/s 7.31 secs/100000 ios 00:19:12.388 SPDK bdev Controller (SPDK2 ) core 3: 10190.33 IO/s 9.81 secs/100000 ios 00:19:12.388 ======================================================== 00:19:12.388 00:19:12.388 10:11:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:12.388 [2024-11-06 10:11:15.862303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.388 Initializing NVMe Controllers 00:19:12.388 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.388 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.388 Namespace ID: 1 size: 0GB 00:19:12.388 Initialization complete. 00:19:12.388 INFO: using host memory buffer for IO 00:19:12.388 Hello world! 
00:19:12.388 [2024-11-06 10:11:15.872381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.648 10:11:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:12.909 [2024-11-06 10:11:16.169822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:13.850 Initializing NVMe Controllers 00:19:13.851 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.851 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.851 Initialization complete. Launching workers. 00:19:13.851 submit (in ns) avg, min, max = 9506.0, 3905.8, 4003035.0 00:19:13.851 complete (in ns) avg, min, max = 17103.0, 2370.8, 4040604.2 00:19:13.851 00:19:13.851 Submit histogram 00:19:13.851 ================ 00:19:13.851 Range in us Cumulative Count 00:19:13.851 3.893 - 3.920: 0.4967% ( 95) 00:19:13.851 3.920 - 3.947: 3.1685% ( 511) 00:19:13.851 3.947 - 3.973: 9.5577% ( 1222) 00:19:13.851 3.973 - 4.000: 19.5232% ( 1906) 00:19:13.851 4.000 - 4.027: 31.7892% ( 2346) 00:19:13.851 4.027 - 4.053: 43.8618% ( 2309) 00:19:13.851 4.053 - 4.080: 60.4622% ( 3175) 00:19:13.851 4.080 - 4.107: 76.6182% ( 3090) 00:19:13.851 4.107 - 4.133: 88.8738% ( 2344) 00:19:13.851 4.133 - 4.160: 94.9807% ( 1168) 00:19:13.851 4.160 - 4.187: 97.6576% ( 512) 00:19:13.851 4.187 - 4.213: 99.0641% ( 269) 00:19:13.851 4.213 - 4.240: 99.3464% ( 54) 00:19:13.851 4.240 - 4.267: 99.4144% ( 13) 00:19:13.851 4.267 - 4.293: 99.4353% ( 4) 00:19:13.851 4.293 - 4.320: 99.4406% ( 1) 00:19:13.851 4.347 - 4.373: 99.4510% ( 2) 00:19:13.851 4.533 - 4.560: 99.4562% ( 1) 00:19:13.851 4.747 - 4.773: 99.4615% ( 1) 00:19:13.851 4.773 - 4.800: 99.4667% ( 1) 00:19:13.851 4.880 - 4.907: 99.4719% ( 1) 00:19:13.851 5.040 - 5.067: 99.4772% ( 1) 00:19:13.851 5.093 - 5.120: 99.4824% ( 1) 00:19:13.851 5.307 - 5.333: 99.4876% ( 1) 00:19:13.851 5.387 - 5.413: 99.4928% ( 1) 00:19:13.851 5.707 - 5.733: 99.4981% ( 1) 00:19:13.851 5.893 - 5.920: 99.5033% ( 1) 00:19:13.851 6.000 - 6.027: 99.5138% ( 2) 00:19:13.851 6.027 - 6.053: 99.5190% ( 1) 00:19:13.851 6.053 - 6.080: 99.5242% ( 1) 00:19:13.851 6.080 - 6.107: 99.5294% ( 1) 00:19:13.851 6.160 - 6.187: 99.5347% ( 1) 00:19:13.851 6.187 - 6.213: 99.5399% ( 1) 00:19:13.851 6.240 - 6.267: 99.5451% ( 1) 00:19:13.851 6.267 - 6.293: 99.5556% ( 2) 00:19:13.851 6.293 - 6.320: 99.5608% ( 1) 00:19:13.851 6.347 - 6.373: 99.5660% ( 1) 00:19:13.851 6.373 - 6.400: 99.5713% ( 1) 00:19:13.851 6.453 - 6.480: 99.5765% ( 1) 00:19:13.851 6.533 - 6.560: 99.5817% ( 1) 00:19:13.851 6.587 - 6.613: 99.5869% ( 1) 00:19:13.851 6.667 - 6.693: 99.5922% ( 1) 00:19:13.851 6.747 - 6.773: 99.5974% ( 1) 00:19:13.851 6.827 - 6.880: 99.6026% ( 1) 00:19:13.851 7.040 - 7.093: 99.6079% ( 1) 00:19:13.851 7.253 - 7.307: 99.6183% ( 2) 00:19:13.851 7.413 - 7.467: 99.6235% ( 1) 00:19:13.851 7.467 - 7.520: 99.6288% ( 1) 00:19:13.851 7.520 - 7.573: 99.6392% ( 2) 00:19:13.851 7.733 - 7.787: 99.6445% ( 1) 00:19:13.851 7.787 - 7.840: 99.6549% ( 2) 00:19:13.851 7.947 - 8.000: 99.6601% ( 1) 00:19:13.851 8.000 - 8.053: 99.6654% ( 1) 00:19:13.851 8.053 - 8.107: 99.6758% ( 2) 00:19:13.851 8.107 - 8.160: 99.6915% ( 3) 00:19:13.851 8.160 - 8.213: 99.7072% ( 3) 00:19:13.851 8.213 - 8.267: 99.7177% ( 2) 00:19:13.851 8.320 - 8.373: 99.7281% ( 2) 
00:19:13.851 8.373 - 8.427: 99.7386% ( 2) 00:19:13.851 8.427 - 8.480: 99.7438% ( 1) 00:19:13.851 8.480 - 8.533: 99.7543% ( 2) 00:19:13.851 8.533 - 8.587: 99.7699% ( 3) 00:19:13.851 8.587 - 8.640: 99.7752% ( 1) 00:19:13.851 8.640 - 8.693: 99.7856% ( 2) 00:19:13.851 8.693 - 8.747: 99.7909% ( 1) 00:19:13.851 8.747 - 8.800: 99.8013% ( 2) 00:19:13.851 8.800 - 8.853: 99.8118% ( 2) 00:19:13.851 8.853 - 8.907: 99.8170% ( 1) 00:19:13.851 8.960 - 9.013: 99.8222% ( 1) 00:19:13.851 9.120 - 9.173: 99.8275% ( 1) 00:19:13.851 9.173 - 9.227: 99.8327% ( 1) 00:19:13.851 9.333 - 9.387: 99.8379% ( 1) 00:19:13.851 10.080 - 10.133: 99.8431% ( 1) 00:19:13.851 14.187 - 14.293: 99.8484% ( 1) 00:19:13.851 15.893 - 16.000: 99.8536% ( 1) 00:19:13.851 16.640 - 16.747: 99.8588% ( 1) 00:19:13.851 17.280 - 17.387: 99.8641% ( 1) 00:19:13.851 3986.773 - 4014.080: 100.0000% ( 26) 00:19:13.851 00:19:13.851 Complete histogram 00:19:13.851 ================== 00:19:13.851 Ra[2024-11-06 10:11:17.262556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:13.851 nge in us Cumulative Count 00:19:13.851 2.360 - 2.373: 0.0052% ( 1) 00:19:13.851 2.373 - 2.387: 0.0261% ( 4) 00:19:13.851 2.387 - 2.400: 1.0771% ( 201) 00:19:13.851 2.400 - 2.413: 1.2026% ( 24) 00:19:13.851 2.413 - 2.427: 1.4117% ( 40) 00:19:13.851 2.427 - 2.440: 1.9555% ( 104) 00:19:13.851 2.440 - 2.453: 46.2093% ( 8464) 00:19:13.851 2.453 - 2.467: 58.8257% ( 2413) 00:19:13.851 2.467 - 2.480: 70.9662% ( 2322) 00:19:13.851 2.480 - 2.493: 77.9253% ( 1331) 00:19:13.851 2.493 - 2.507: 80.8533% ( 560) 00:19:13.851 2.507 - 2.520: 84.3721% ( 673) 00:19:13.851 2.520 - 2.533: 90.8972% ( 1248) 00:19:13.851 2.533 - 2.547: 95.0643% ( 797) 00:19:13.851 2.547 - 2.560: 97.1871% ( 406) 00:19:13.851 2.560 - 2.573: 98.4942% ( 250) 00:19:13.851 2.573 - 2.587: 99.0902% ( 114) 00:19:13.851 2.587 - 2.600: 99.3255% ( 45) 00:19:13.851 2.600 - 2.613: 99.3412% ( 3) 00:19:13.851 2.613 - 2.627: 99.3621% ( 4) 00:19:13.851 2.947 - 2.960: 99.3674% ( 1) 00:19:13.851 4.533 - 4.560: 99.3726% ( 1) 00:19:13.851 4.640 - 4.667: 99.3778% ( 1) 00:19:13.851 4.720 - 4.747: 99.3830% ( 1) 00:19:13.851 4.773 - 4.800: 99.3935% ( 2) 00:19:13.851 4.800 - 4.827: 99.3987% ( 1) 00:19:13.851 5.013 - 5.040: 99.4040% ( 1) 00:19:13.851 5.093 - 5.120: 99.4092% ( 1) 00:19:13.851 5.200 - 5.227: 99.4144% ( 1) 00:19:13.851 5.227 - 5.253: 99.4196% ( 1) 00:19:13.851 5.493 - 5.520: 99.4249% ( 1) 00:19:13.851 5.573 - 5.600: 99.4406% ( 3) 00:19:13.851 5.653 - 5.680: 99.4458% ( 1) 00:19:13.851 5.680 - 5.707: 99.4510% ( 1) 00:19:13.851 5.707 - 5.733: 99.4562% ( 1) 00:19:13.851 5.760 - 5.787: 99.4615% ( 1) 00:19:13.851 5.867 - 5.893: 99.4667% ( 1) 00:19:13.851 6.000 - 6.027: 99.4719% ( 1) 00:19:13.851 6.133 - 6.160: 99.4772% ( 1) 00:19:13.851 6.213 - 6.240: 99.4824% ( 1) 00:19:13.851 6.267 - 6.293: 99.4876% ( 1) 00:19:13.851 6.293 - 6.320: 99.4928% ( 1) 00:19:13.851 6.400 - 6.427: 99.4981% ( 1) 00:19:13.851 6.480 - 6.507: 99.5033% ( 1) 00:19:13.851 6.533 - 6.560: 99.5085% ( 1) 00:19:13.851 6.640 - 6.667: 99.5138% ( 1) 00:19:13.851 6.800 - 6.827: 99.5190% ( 1) 00:19:13.851 6.880 - 6.933: 99.5242% ( 1) 00:19:13.851 6.933 - 6.987: 99.5347% ( 2) 00:19:13.851 6.987 - 7.040: 99.5451% ( 2) 00:19:13.851 7.093 - 7.147: 99.5504% ( 1) 00:19:13.851 7.147 - 7.200: 99.5556% ( 1) 00:19:13.851 7.467 - 7.520: 99.5608% ( 1) 00:19:13.851 7.627 - 7.680: 99.5660% ( 1) 00:19:13.851 7.680 - 7.733: 99.5713% ( 1) 00:19:13.851 7.733 - 7.787: 99.5765% ( 1) 00:19:13.851 7.840 - 7.893: 99.5869% ( 2) 
00:19:13.851 7.947 - 8.000: 99.5922% ( 1) 00:19:13.851 8.213 - 8.267: 99.5974% ( 1) 00:19:13.851 8.320 - 8.373: 99.6026% ( 1) 00:19:13.851 8.373 - 8.427: 99.6079% ( 1) 00:19:13.851 8.427 - 8.480: 99.6131% ( 1) 00:19:13.851 8.747 - 8.800: 99.6183% ( 1) 00:19:13.851 13.973 - 14.080: 99.6235% ( 1) 00:19:13.851 14.187 - 14.293: 99.6288% ( 1) 00:19:13.851 44.160 - 44.373: 99.6340% ( 1) 00:19:13.851 3822.933 - 3850.240: 99.6392% ( 1) 00:19:13.851 3986.773 - 4014.080: 99.9948% ( 68) 00:19:13.851 4014.080 - 4041.387: 100.0000% ( 1) 00:19:13.851 00:19:13.851 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:13.851 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:13.851 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:13.851 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:13.851 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:14.112 [ 00:19:14.112 { 00:19:14.112 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:14.112 "subtype": "Discovery", 00:19:14.112 "listen_addresses": [], 00:19:14.112 "allow_any_host": true, 00:19:14.112 "hosts": [] 00:19:14.112 }, 00:19:14.112 { 00:19:14.112 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:14.112 "subtype": "NVMe", 00:19:14.112 "listen_addresses": [ 00:19:14.112 { 00:19:14.112 "trtype": "VFIOUSER", 00:19:14.112 "adrfam": "IPv4", 00:19:14.112 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:14.112 "trsvcid": "0" 00:19:14.112 } 00:19:14.112 ], 00:19:14.112 "allow_any_host": true, 00:19:14.112 "hosts": [], 00:19:14.112 "serial_number": "SPDK1", 00:19:14.112 "model_number": "SPDK bdev Controller", 00:19:14.112 "max_namespaces": 32, 00:19:14.112 "min_cntlid": 1, 00:19:14.112 "max_cntlid": 65519, 00:19:14.112 "namespaces": [ 00:19:14.112 { 00:19:14.112 "nsid": 1, 00:19:14.112 "bdev_name": "Malloc1", 00:19:14.112 "name": "Malloc1", 00:19:14.112 "nguid": "7DB53B4E34B84DDBB7050DE04FCC72E0", 00:19:14.112 "uuid": "7db53b4e-34b8-4ddb-b705-0de04fcc72e0" 00:19:14.112 }, 00:19:14.112 { 00:19:14.112 "nsid": 2, 00:19:14.112 "bdev_name": "Malloc3", 00:19:14.112 "name": "Malloc3", 00:19:14.112 "nguid": "B7D9FC121F27493D87F27C03E4C3A6BC", 00:19:14.112 "uuid": "b7d9fc12-1f27-493d-87f2-7c03e4c3a6bc" 00:19:14.112 } 00:19:14.112 ] 00:19:14.112 }, 00:19:14.112 { 00:19:14.112 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:14.112 "subtype": "NVMe", 00:19:14.112 "listen_addresses": [ 00:19:14.112 { 00:19:14.112 "trtype": "VFIOUSER", 00:19:14.112 "adrfam": "IPv4", 00:19:14.112 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:14.112 "trsvcid": "0" 00:19:14.112 } 00:19:14.112 ], 00:19:14.112 "allow_any_host": true, 00:19:14.112 "hosts": [], 00:19:14.112 "serial_number": "SPDK2", 00:19:14.112 "model_number": "SPDK bdev Controller", 00:19:14.112 "max_namespaces": 32, 00:19:14.112 "min_cntlid": 1, 00:19:14.112 "max_cntlid": 65519, 00:19:14.112 "namespaces": [ 00:19:14.112 { 00:19:14.112 "nsid": 1, 00:19:14.112 "bdev_name": "Malloc2", 00:19:14.112 "name": "Malloc2", 00:19:14.112 "nguid": "4BB07E1DCCA7478A8F78548C2E2C040A", 00:19:14.112 "uuid": "4bb07e1d-cca7-478a-8f78-548c2e2c040a" 
00:19:14.112 } 00:19:14.112 ] 00:19:14.112 } 00:19:14.112 ] 00:19:14.112 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3851239 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:14.113 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:14.373 Malloc4 00:19:14.373 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:14.373 [2024-11-06 10:11:17.702174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:14.373 [2024-11-06 10:11:17.836049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:14.373 10:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:14.634 Asynchronous Event Request test 00:19:14.634 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:14.634 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:14.634 Registering asynchronous event callbacks... 00:19:14.634 Starting namespace attribute notice tests for all controllers... 00:19:14.634 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:14.634 aer_cb - Changed Namespace 00:19:14.634 Cleaning up... 
00:19:14.634 [ 00:19:14.634 { 00:19:14.634 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:14.634 "subtype": "Discovery", 00:19:14.634 "listen_addresses": [], 00:19:14.634 "allow_any_host": true, 00:19:14.634 "hosts": [] 00:19:14.634 }, 00:19:14.634 { 00:19:14.634 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:14.634 "subtype": "NVMe", 00:19:14.634 "listen_addresses": [ 00:19:14.634 { 00:19:14.634 "trtype": "VFIOUSER", 00:19:14.634 "adrfam": "IPv4", 00:19:14.634 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:14.634 "trsvcid": "0" 00:19:14.634 } 00:19:14.634 ], 00:19:14.634 "allow_any_host": true, 00:19:14.634 "hosts": [], 00:19:14.634 "serial_number": "SPDK1", 00:19:14.634 "model_number": "SPDK bdev Controller", 00:19:14.634 "max_namespaces": 32, 00:19:14.634 "min_cntlid": 1, 00:19:14.634 "max_cntlid": 65519, 00:19:14.634 "namespaces": [ 00:19:14.634 { 00:19:14.634 "nsid": 1, 00:19:14.634 "bdev_name": "Malloc1", 00:19:14.634 "name": "Malloc1", 00:19:14.634 "nguid": "7DB53B4E34B84DDBB7050DE04FCC72E0", 00:19:14.634 "uuid": "7db53b4e-34b8-4ddb-b705-0de04fcc72e0" 00:19:14.634 }, 00:19:14.634 { 00:19:14.634 "nsid": 2, 00:19:14.634 "bdev_name": "Malloc3", 00:19:14.634 "name": "Malloc3", 00:19:14.634 "nguid": "B7D9FC121F27493D87F27C03E4C3A6BC", 00:19:14.634 "uuid": "b7d9fc12-1f27-493d-87f2-7c03e4c3a6bc" 00:19:14.634 } 00:19:14.634 ] 00:19:14.634 }, 00:19:14.634 { 00:19:14.634 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:14.634 "subtype": "NVMe", 00:19:14.634 "listen_addresses": [ 00:19:14.634 { 00:19:14.634 "trtype": "VFIOUSER", 00:19:14.634 "adrfam": "IPv4", 00:19:14.634 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:14.634 "trsvcid": "0" 00:19:14.634 } 00:19:14.634 ], 00:19:14.634 "allow_any_host": true, 00:19:14.634 "hosts": [], 00:19:14.634 "serial_number": "SPDK2", 00:19:14.634 "model_number": "SPDK bdev Controller", 00:19:14.634 "max_namespaces": 32, 00:19:14.634 "min_cntlid": 1, 00:19:14.634 "max_cntlid": 65519, 00:19:14.634 "namespaces": [ 00:19:14.634 { 00:19:14.634 "nsid": 1, 00:19:14.634 "bdev_name": "Malloc2", 00:19:14.634 "name": "Malloc2", 00:19:14.634 "nguid": "4BB07E1DCCA7478A8F78548C2E2C040A", 00:19:14.634 "uuid": "4bb07e1d-cca7-478a-8f78-548c2e2c040a" 00:19:14.634 }, 00:19:14.634 { 00:19:14.634 "nsid": 2, 00:19:14.634 "bdev_name": "Malloc4", 00:19:14.634 "name": "Malloc4", 00:19:14.634 "nguid": "08770169BF5E43CFBD20B3F046CE1D3C", 00:19:14.634 "uuid": "08770169-bf5e-43cf-bd20-b3f046ce1d3c" 00:19:14.634 } 00:19:14.634 ] 00:19:14.634 } 00:19:14.634 ] 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3851239 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3842150 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3842150 ']' 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3842150 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3842150 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3842150' 00:19:14.634 killing process with pid 3842150 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3842150 00:19:14.634 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3842150 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3851416 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3851416' 00:19:14.896 Process pid: 3851416 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3851416 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3851416 ']' 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.896 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:14.896 [2024-11-06 10:11:18.326961] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:14.896 [2024-11-06 10:11:18.327887] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:14.896 [2024-11-06 10:11:18.327928] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.156 [2024-11-06 10:11:18.410299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.156 [2024-11-06 10:11:18.446495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.156 [2024-11-06 10:11:18.446534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.156 [2024-11-06 10:11:18.446542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.156 [2024-11-06 10:11:18.446549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.156 [2024-11-06 10:11:18.446555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.156 [2024-11-06 10:11:18.448016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.156 [2024-11-06 10:11:18.448288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.156 [2024-11-06 10:11:18.448448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.156 [2024-11-06 10:11:18.448449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.156 [2024-11-06 10:11:18.503993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:15.156 [2024-11-06 10:11:18.504034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:15.156 [2024-11-06 10:11:18.505035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:15.156 [2024-11-06 10:11:18.505611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:15.156 [2024-11-06 10:11:18.505739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
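For reference, a condensed per-device sketch of the interrupt-mode bring-up that the trace below performs (setup_nvmf_vfio_user --interrupt-mode '-M -I'); the commands are those shown in the xtrace, with $SPDK again standing in for the checkout path and the loop bound taken from the 'seq 1 2' in the trace:
  # the target was already launched above as: nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I    # vfio-user transport in interrupt mode
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done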
00:19:15.774 10:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.774 10:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:19:15.774 10:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:16.789 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:17.049 Malloc1 00:19:17.049 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:17.310 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:17.571 10:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:17.832 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:17.832 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:17.832 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:17.832 Malloc2 00:19:17.832 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:18.093 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3851416 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 3851416 ']' 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3851416 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.353 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3851416 00:19:18.613 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:18.613 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:18.613 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3851416' 00:19:18.613 killing process with pid 3851416 00:19:18.613 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3851416 00:19:18.613 10:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3851416 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:18.613 00:19:18.613 real 0m51.404s 00:19:18.613 user 3m16.888s 00:19:18.613 sys 0m3.074s 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:18.613 ************************************ 00:19:18.613 END TEST nvmf_vfio_user 00:19:18.613 ************************************ 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.613 ************************************ 00:19:18.613 START TEST nvmf_vfio_user_nvme_compliance 00:19:18.613 ************************************ 00:19:18.613 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:18.903 * Looking for test storage... 
00:19:18.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.903 --rc genhtml_branch_coverage=1 00:19:18.903 --rc genhtml_function_coverage=1 00:19:18.903 --rc genhtml_legend=1 00:19:18.903 --rc geninfo_all_blocks=1 00:19:18.903 --rc geninfo_unexecuted_blocks=1 00:19:18.903 00:19:18.903 ' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.903 --rc genhtml_branch_coverage=1 00:19:18.903 --rc genhtml_function_coverage=1 00:19:18.903 --rc genhtml_legend=1 00:19:18.903 --rc geninfo_all_blocks=1 00:19:18.903 --rc geninfo_unexecuted_blocks=1 00:19:18.903 00:19:18.903 ' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.903 --rc genhtml_branch_coverage=1 00:19:18.903 --rc genhtml_function_coverage=1 00:19:18.903 --rc genhtml_legend=1 00:19:18.903 --rc geninfo_all_blocks=1 00:19:18.903 --rc geninfo_unexecuted_blocks=1 00:19:18.903 00:19:18.903 ' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.903 --rc genhtml_branch_coverage=1 00:19:18.903 --rc genhtml_function_coverage=1 00:19:18.903 --rc genhtml_legend=1 00:19:18.903 --rc geninfo_all_blocks=1 00:19:18.903 --rc 
geninfo_unexecuted_blocks=1 00:19:18.903 00:19:18.903 ' 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.903 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3852330 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3852330' 00:19:18.904 Process pid: 3852330 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3852330 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3852330 ']' 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.904 10:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.904 [2024-11-06 10:11:22.385033] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:18.904 [2024-11-06 10:11:22.385102] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.164 [2024-11-06 10:11:22.468098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.164 [2024-11-06 10:11:22.509166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.164 [2024-11-06 10:11:22.509203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.164 [2024-11-06 10:11:22.509212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.164 [2024-11-06 10:11:22.509218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.164 [2024-11-06 10:11:22.509224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.164 [2024-11-06 10:11:22.510661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.164 [2024-11-06 10:11:22.510776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.164 [2024-11-06 10:11:22.510779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.738 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.738 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:19:19.738 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 malloc0 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:21.124 10:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.125 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:21.125 00:19:21.125 00:19:21.125 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.125 http://cunit.sourceforge.net/ 00:19:21.125 00:19:21.125 00:19:21.125 Suite: nvme_compliance 00:19:21.125 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 10:11:24.471321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.125 [2024-11-06 10:11:24.472675] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:21.125 [2024-11-06 10:11:24.472687] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:21.125 [2024-11-06 10:11:24.472692] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:21.125 [2024-11-06 10:11:24.474338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.125 passed 00:19:21.125 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 10:11:24.569953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.125 [2024-11-06 10:11:24.572968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.125 passed 00:19:21.385 Test: admin_identify_ns ...[2024-11-06 10:11:24.667090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.385 [2024-11-06 10:11:24.730880] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:21.385 [2024-11-06 10:11:24.738877] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:21.385 [2024-11-06 10:11:24.759986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:21.385 passed 00:19:21.385 Test: admin_get_features_mandatory_features ...[2024-11-06 10:11:24.851597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.385 [2024-11-06 10:11:24.854616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.645 passed 00:19:21.645 Test: admin_get_features_optional_features ...[2024-11-06 10:11:24.949155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.645 [2024-11-06 10:11:24.952168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.645 passed 00:19:21.645 Test: admin_set_features_number_of_queues ...[2024-11-06 10:11:25.046115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.905 [2024-11-06 10:11:25.150966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.905 passed 00:19:21.905 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 10:11:25.244979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.905 [2024-11-06 10:11:25.249005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:21.905 passed 00:19:21.905 Test: admin_get_log_page_with_lpo ...[2024-11-06 10:11:25.342787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.165 [2024-11-06 10:11:25.409877] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:22.165 [2024-11-06 10:11:25.422930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.165 passed 00:19:22.165 Test: fabric_property_get ...[2024-11-06 10:11:25.514969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.165 [2024-11-06 10:11:25.516218] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:22.165 [2024-11-06 10:11:25.517981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.165 passed 00:19:22.165 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 10:11:25.613776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.165 [2024-11-06 10:11:25.615032] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:22.165 [2024-11-06 10:11:25.616792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.165 passed 00:19:22.426 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 10:11:25.709914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.426 [2024-11-06 10:11:25.793870] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:22.426 [2024-11-06 10:11:25.809867] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:22.426 [2024-11-06 10:11:25.814959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.426 passed 00:19:22.426 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 10:11:25.908937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.426 [2024-11-06 10:11:25.913193] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:22.426 [2024-11-06 10:11:25.914968] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.686 passed 00:19:22.686 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 10:11:26.005089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.686 [2024-11-06 10:11:26.080871] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:22.686 [2024-11-06 10:11:26.104878] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:22.686 [2024-11-06 10:11:26.109947] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.686 passed 00:19:22.945 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 10:11:26.203949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.945 [2024-11-06 10:11:26.205197] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:22.945 [2024-11-06 10:11:26.205217] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:22.946 [2024-11-06 10:11:26.206965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.946 passed 00:19:22.946 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 10:11:26.300097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.946 [2024-11-06 10:11:26.391867] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:22.946 [2024-11-06 10:11:26.399869] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:22.946 [2024-11-06 10:11:26.407870] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:22.946 [2024-11-06 10:11:26.415871] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:22.946 [2024-11-06 10:11:26.444950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.205 passed 00:19:23.205 Test: admin_create_io_sq_verify_pc ...[2024-11-06 10:11:26.537100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.205 [2024-11-06 10:11:26.556876] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:23.205 [2024-11-06 10:11:26.574265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.205 passed 00:19:23.205 Test: admin_create_io_qp_max_qps ...[2024-11-06 10:11:26.666754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.584 [2024-11-06 10:11:27.785874] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:24.845 [2024-11-06 10:11:28.163403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.845 passed 00:19:24.845 Test: admin_create_io_sq_shared_cq ...[2024-11-06 10:11:28.255098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.106 [2024-11-06 10:11:28.390870] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:25.106 [2024-11-06 10:11:28.427936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.106 passed 00:19:25.106 00:19:25.106 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.106 suites 1 1 n/a 0 0 00:19:25.106 tests 18 18 18 0 0 00:19:25.106 asserts 
360 360 360 0 n/a 00:19:25.106 00:19:25.106 Elapsed time = 1.662 seconds 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3852330 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3852330 ']' 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3852330 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3852330 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3852330' 00:19:25.106 killing process with pid 3852330 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3852330 00:19:25.106 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3852330 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:25.367 00:19:25.367 real 0m6.583s 00:19:25.367 user 0m18.651s 00:19:25.367 sys 0m0.569s 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.367 ************************************ 00:19:25.367 END TEST nvmf_vfio_user_nvme_compliance 00:19:25.367 ************************************ 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.367 ************************************ 00:19:25.367 START TEST nvmf_vfio_user_fuzz 00:19:25.367 ************************************ 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:25.367 * Looking for test storage... 
00:19:25.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:19:25.367 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.627 --rc genhtml_branch_coverage=1 00:19:25.627 --rc genhtml_function_coverage=1 00:19:25.627 --rc genhtml_legend=1 00:19:25.627 --rc geninfo_all_blocks=1 00:19:25.627 --rc geninfo_unexecuted_blocks=1 00:19:25.627 00:19:25.627 ' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.627 --rc genhtml_branch_coverage=1 00:19:25.627 --rc genhtml_function_coverage=1 00:19:25.627 --rc genhtml_legend=1 00:19:25.627 --rc geninfo_all_blocks=1 00:19:25.627 --rc geninfo_unexecuted_blocks=1 00:19:25.627 00:19:25.627 ' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.627 --rc genhtml_branch_coverage=1 00:19:25.627 --rc genhtml_function_coverage=1 00:19:25.627 --rc genhtml_legend=1 00:19:25.627 --rc geninfo_all_blocks=1 00:19:25.627 --rc geninfo_unexecuted_blocks=1 00:19:25.627 00:19:25.627 ' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.627 --rc genhtml_branch_coverage=1 00:19:25.627 --rc genhtml_function_coverage=1 00:19:25.627 --rc genhtml_legend=1 00:19:25.627 --rc geninfo_all_blocks=1 00:19:25.627 --rc geninfo_unexecuted_blocks=1 00:19:25.627 00:19:25.627 ' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.627 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:25.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3853735 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3853735' 00:19:25.628 Process pid: 3853735 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3853735 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3853735 ']' 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
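The "[: : integer expression expected" message near the start of this block is emitted by nvmf/common.sh line 33, where an empty string reaches a numeric test. A minimal standalone reproduction of that bash behaviour (the variable name below is hypothetical; the real one is whatever common.sh evaluates at line 33):

    # When the operand is empty, `[` cannot parse it as an integer, prints the
    # warning, and returns status 2; inside an `if` this simply takes the else
    # path, which is why the test run above keeps going.
    SOME_FLAG=""                      # hypothetical stand-in for the empty variable
    if [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag not enabled"       # this branch is taken after the warning
    fi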
00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.628 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:26.574 10:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.574 10:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:19:26.574 10:11:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.514 malloc0 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:27.514 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.515 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.515 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.515 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
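The RPC sequence traced above is what stands up the vfio-user fuzz target. Condensed into a sketch (rpc_cmd in the trace is SPDK's JSON-RPC client helper, shown here as direct rpc.py calls; paths, sizes and NQNs are copied from the log, and a running nvmf_tgt is assumed):

    # Target-side setup for the vfio-user fuzz test, as traced above.
    rpc.py nvmf_create_transport -t VFIOUSER             # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                           # directory the listener exposes
    rpc.py bdev_malloc_create 64 512 -b malloc0           # 64 MB RAM bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # -a: allow any host, -s: serial
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0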
00:19:27.515 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:59.621 Fuzzing completed. Shutting down the fuzz application 00:19:59.621 00:19:59.621 Dumping successful admin opcodes: 00:19:59.621 8, 9, 10, 24, 00:19:59.621 Dumping successful io opcodes: 00:19:59.621 0, 00:19:59.621 NS: 0x20000081ef00 I/O qp, Total commands completed: 1134382, total successful commands: 4467, random_seed: 3839689152 00:19:59.621 NS: 0x20000081ef00 admin qp, Total commands completed: 142623, total successful commands: 1159, random_seed: 1111515840 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3853735 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3853735 ']' 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3853735 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3853735 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3853735' 00:19:59.621 killing process with pid 3853735 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3853735 00:19:59.621 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3853735 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:59.622 00:19:59.622 real 0m33.767s 00:19:59.622 user 0m37.886s 00:19:59.622 sys 0m26.787s 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.622 
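For reference, the fuzzer invocation that produced the summary above, written out as a plain command run from the spdk tree (the transport ID is the $trid composed earlier; -m 0x2 is the core mask, -t 30 the run time in seconds, -S 123456 a fixed seed, and the remaining flags are left exactly as they appear in the trace):

    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a
    # The summary above reports roughly 1.13M I/O and 142k admin commands submitted
    # in the 30-second window, with the listed opcodes completing successfully.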
************************************ 00:19:59.622 END TEST nvmf_vfio_user_fuzz 00:19:59.622 ************************************ 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.622 ************************************ 00:19:59.622 START TEST nvmf_auth_target 00:19:59.622 ************************************ 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:59.622 * Looking for test storage... 00:19:59.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:59.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.622 --rc genhtml_branch_coverage=1 00:19:59.622 --rc genhtml_function_coverage=1 00:19:59.622 --rc genhtml_legend=1 00:19:59.622 --rc geninfo_all_blocks=1 00:19:59.622 --rc geninfo_unexecuted_blocks=1 00:19:59.622 00:19:59.622 ' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:59.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.622 --rc genhtml_branch_coverage=1 00:19:59.622 --rc genhtml_function_coverage=1 00:19:59.622 --rc genhtml_legend=1 00:19:59.622 --rc geninfo_all_blocks=1 00:19:59.622 --rc geninfo_unexecuted_blocks=1 00:19:59.622 00:19:59.622 ' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:59.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.622 --rc genhtml_branch_coverage=1 00:19:59.622 --rc genhtml_function_coverage=1 00:19:59.622 --rc genhtml_legend=1 00:19:59.622 --rc geninfo_all_blocks=1 00:19:59.622 --rc geninfo_unexecuted_blocks=1 00:19:59.622 00:19:59.622 ' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:59.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.622 --rc genhtml_branch_coverage=1 00:19:59.622 --rc genhtml_function_coverage=1 00:19:59.622 --rc genhtml_legend=1 00:19:59.622 --rc geninfo_all_blocks=1 00:19:59.622 --rc geninfo_unexecuted_blocks=1 00:19:59.622 00:19:59.622 ' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.622 10:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.622 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.623 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:07.763 
10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:07.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.763 10:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:07.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:07.763 Found net devices under 0000:31:00.0: cvl_0_0 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.763 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:07.764 Found net devices under 0000:31:00.1: cvl_0_1 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.764 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.764 10:12:11 
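The nvmf_tcp_init trace above splits the two E810 ports across network namespaces so target and initiator talk over real NICs on a single host. The same topology, condensed into a sketch (interface names and addresses copied from the log; run as root):

    # cvl_0_0 (target side, 10.0.0.2) moves into its own namespace; cvl_0_1
    # (initiator side, 10.0.0.1) stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    # The pings that follow in the log verify both directions before the target starts.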
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:20:07.764 00:20:07.764 --- 10.0.0.2 ping statistics --- 00:20:07.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.764 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:20:07.764 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:20:08.025 00:20:08.025 --- 10.0.0.1 ping statistics --- 00:20:08.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.025 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3864961 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3864961 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3864961 ']' 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.025 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3865215 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b99c2c097172aad2cda9eec3788ceb70f0bfc70584760e6 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.geb 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b99c2c097172aad2cda9eec3788ceb70f0bfc70584760e6 0 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b99c2c097172aad2cda9eec3788ceb70f0bfc70584760e6 0 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b99c2c097172aad2cda9eec3788ceb70f0bfc70584760e6 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.geb 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.geb 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.geb 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8516bf731d168ad1ce8d7ece05faf5781fbbce396e99582c903be7bc36115fbd 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.b0o 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8516bf731d168ad1ce8d7ece05faf5781fbbce396e99582c903be7bc36115fbd 3 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8516bf731d168ad1ce8d7ece05faf5781fbbce396e99582c903be7bc36115fbd 3 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8516bf731d168ad1ce8d7ece05faf5781fbbce396e99582c903be7bc36115fbd 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.b0o 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.b0o 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.b0o 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3ef7865ec2f2eab2a679183c06aaf9b 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yZG 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3ef7865ec2f2eab2a679183c06aaf9b 1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a3ef7865ec2f2eab2a679183c06aaf9b 1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3ef7865ec2f2eab2a679183c06aaf9b 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yZG 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yZG 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.yZG 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f78ea131492ed871ab1ed45cf1a886c12f30795c47102113 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.C1Y 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f78ea131492ed871ab1ed45cf1a886c12f30795c47102113 2 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f78ea131492ed871ab1ed45cf1a886c12f30795c47102113 2 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.968 10:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f78ea131492ed871ab1ed45cf1a886c12f30795c47102113 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:08.968 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.C1Y 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.C1Y 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.C1Y 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=42422aa1ae3ad2047a21d2bf11f9986cf6c4b5019bdd3ba8 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AIj 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 42422aa1ae3ad2047a21d2bf11f9986cf6c4b5019bdd3ba8 2 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 42422aa1ae3ad2047a21d2bf11f9986cf6c4b5019bdd3ba8 2 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=42422aa1ae3ad2047a21d2bf11f9986cf6c4b5019bdd3ba8 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AIj 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AIj 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AIj 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=948515f0e93acd453614c33399ebabde 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.L8t 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 948515f0e93acd453614c33399ebabde 1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 948515f0e93acd453614c33399ebabde 1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=948515f0e93acd453614c33399ebabde 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.L8t 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.L8t 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.L8t 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=93ffe75d134b2a7aa07aeb69c5a4a644b099d9528eeb91f220e5a790c724671d 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dOY 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 93ffe75d134b2a7aa07aeb69c5a4a644b099d9528eeb91f220e5a790c724671d 3 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 93ffe75d134b2a7aa07aeb69c5a4a644b099d9528eeb91f220e5a790c724671d 3 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=93ffe75d134b2a7aa07aeb69c5a4a644b099d9528eeb91f220e5a790c724671d 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dOY 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dOY 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dOY 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3864961 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3864961 ']' 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.230 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3865215 /var/tmp/host.sock 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3865215 ']' 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:09.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
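
At this point all four key slots are populated (keys[0]-keys[3], with controller keys ckeys[0]-ckeys[2] and an intentionally empty ckeys[3]), and the script waits for two RPC servers: the nvmf target behind the default /var/tmp/spdk.sock (pid 3864961, reached through rpc_cmd) and the host-side application behind /var/tmp/host.sock (pid 3865215, reached through the hostrpc wrapper at target/auth.sh@31). The keyring_file_add_key calls that follow register every key file on both servers; condensed, the loop at target/auth.sh@108-113 amounts to roughly the following (paths, key names and variable names are the ones from the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    # target side: default RPC socket /var/tmp/spdk.sock
    $rpc keyring_file_add_key "key$i" "${keys[$i]}"
    # host side: the initiator's RPC socket
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]} ]]; then                      # ckeys[3] is empty and gets skipped
        $rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
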
00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.491 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.geb 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.geb 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.geb 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.b0o ]] 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0o 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.752 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.753 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.753 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0o 00:20:09.753 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0o 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yZG 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.012 10:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yZG 00:20:10.012 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yZG 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.C1Y ]] 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.C1Y 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.C1Y 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.C1Y 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AIj 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.272 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AIj 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AIj 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.L8t ]] 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8t 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8t 00:20:10.532 10:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8t 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.792 10:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dOY 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dOY 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dOY 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.792 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:11.051 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:11.051 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.051 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.051 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.052 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.052 
10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.311 00:20:11.311 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.311 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.311 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.571 { 00:20:11.571 "cntlid": 1, 00:20:11.571 "qid": 0, 00:20:11.571 "state": "enabled", 00:20:11.571 "thread": "nvmf_tgt_poll_group_000", 00:20:11.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:11.571 "listen_address": { 00:20:11.571 "trtype": "TCP", 00:20:11.571 "adrfam": "IPv4", 00:20:11.571 "traddr": "10.0.0.2", 00:20:11.571 "trsvcid": "4420" 00:20:11.571 }, 00:20:11.571 "peer_address": { 00:20:11.571 "trtype": "TCP", 00:20:11.571 "adrfam": "IPv4", 00:20:11.571 "traddr": "10.0.0.1", 00:20:11.571 "trsvcid": "41256" 00:20:11.571 }, 00:20:11.571 "auth": { 00:20:11.571 "state": "completed", 00:20:11.571 "digest": "sha256", 00:20:11.571 "dhgroup": "null" 00:20:11.571 } 00:20:11.571 } 00:20:11.571 ]' 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.571 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.831 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:11.831 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:12.401 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.401 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:12.401 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.401 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.669 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.669 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.669 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.669 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.669 10:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.669 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.929 00:20:12.929 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.929 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.929 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.187 { 00:20:13.187 "cntlid": 3, 00:20:13.187 "qid": 0, 00:20:13.187 "state": "enabled", 00:20:13.187 "thread": "nvmf_tgt_poll_group_000", 00:20:13.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:13.187 "listen_address": { 00:20:13.187 "trtype": "TCP", 00:20:13.187 "adrfam": "IPv4", 00:20:13.187 "traddr": "10.0.0.2", 00:20:13.187 "trsvcid": "4420" 00:20:13.187 }, 00:20:13.187 "peer_address": { 00:20:13.187 "trtype": "TCP", 00:20:13.187 "adrfam": "IPv4", 00:20:13.187 "traddr": "10.0.0.1", 00:20:13.187 "trsvcid": "41276" 00:20:13.187 }, 00:20:13.187 "auth": { 00:20:13.187 "state": "completed", 00:20:13.187 "digest": "sha256", 00:20:13.187 "dhgroup": "null" 00:20:13.187 } 00:20:13.187 } 00:20:13.187 ]' 00:20:13.187 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.188 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.999 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:13.999 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.257 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.517 10:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.517 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.517 00:20:14.776 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.776 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.777 { 00:20:14.777 "cntlid": 5, 00:20:14.777 "qid": 0, 00:20:14.777 "state": "enabled", 00:20:14.777 "thread": "nvmf_tgt_poll_group_000", 00:20:14.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:14.777 "listen_address": { 00:20:14.777 "trtype": "TCP", 00:20:14.777 "adrfam": "IPv4", 00:20:14.777 "traddr": "10.0.0.2", 00:20:14.777 "trsvcid": "4420" 00:20:14.777 }, 00:20:14.777 "peer_address": { 00:20:14.777 "trtype": "TCP", 00:20:14.777 "adrfam": "IPv4", 00:20:14.777 "traddr": "10.0.0.1", 00:20:14.777 "trsvcid": "41298" 00:20:14.777 }, 00:20:14.777 "auth": { 00:20:14.777 "state": "completed", 00:20:14.777 "digest": "sha256", 00:20:14.777 "dhgroup": "null" 00:20:14.777 } 00:20:14.777 } 00:20:14.777 ]' 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.777 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.036 10:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:15.036 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:15.974 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.235 00:20:16.235 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.497 { 00:20:16.497 "cntlid": 7, 00:20:16.497 "qid": 0, 00:20:16.497 "state": "enabled", 00:20:16.497 "thread": "nvmf_tgt_poll_group_000", 00:20:16.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:16.497 "listen_address": { 00:20:16.497 "trtype": "TCP", 00:20:16.497 "adrfam": "IPv4", 00:20:16.497 "traddr": "10.0.0.2", 00:20:16.497 "trsvcid": "4420" 00:20:16.497 }, 00:20:16.497 "peer_address": { 00:20:16.497 "trtype": "TCP", 00:20:16.497 "adrfam": "IPv4", 00:20:16.497 "traddr": "10.0.0.1", 00:20:16.497 "trsvcid": "41326" 00:20:16.497 }, 00:20:16.497 "auth": { 00:20:16.497 "state": "completed", 00:20:16.497 "digest": "sha256", 00:20:16.497 "dhgroup": "null" 00:20:16.497 } 00:20:16.497 } 00:20:16.497 ]' 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.497 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:16.780 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:17.759 10:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.759 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.043 00:20:18.043 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.043 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.043 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.304 { 00:20:18.304 "cntlid": 9, 00:20:18.304 "qid": 0, 00:20:18.304 "state": "enabled", 00:20:18.304 "thread": "nvmf_tgt_poll_group_000", 00:20:18.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:18.304 "listen_address": { 00:20:18.304 "trtype": "TCP", 00:20:18.304 "adrfam": "IPv4", 00:20:18.304 "traddr": "10.0.0.2", 00:20:18.304 "trsvcid": "4420" 00:20:18.304 }, 00:20:18.304 "peer_address": { 00:20:18.304 "trtype": "TCP", 00:20:18.304 "adrfam": "IPv4", 00:20:18.304 "traddr": "10.0.0.1", 00:20:18.304 "trsvcid": "41358" 00:20:18.304 }, 00:20:18.304 "auth": { 00:20:18.304 "state": "completed", 00:20:18.304 "digest": "sha256", 00:20:18.304 "dhgroup": "ffdhe2048" 00:20:18.304 } 00:20:18.304 } 00:20:18.304 ]' 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.304 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.565 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:18.565 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.507 10:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.507 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.508 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.508 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.508 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.508 10:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.767 00:20:19.767 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.767 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.767 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.028 { 00:20:20.028 "cntlid": 11, 00:20:20.028 "qid": 0, 00:20:20.028 "state": "enabled", 00:20:20.028 "thread": "nvmf_tgt_poll_group_000", 00:20:20.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:20.028 "listen_address": { 00:20:20.028 "trtype": "TCP", 00:20:20.028 "adrfam": "IPv4", 00:20:20.028 "traddr": "10.0.0.2", 00:20:20.028 "trsvcid": "4420" 00:20:20.028 }, 00:20:20.028 "peer_address": { 00:20:20.028 "trtype": "TCP", 00:20:20.028 "adrfam": "IPv4", 00:20:20.028 "traddr": "10.0.0.1", 00:20:20.028 "trsvcid": "43892" 00:20:20.028 }, 00:20:20.028 "auth": { 00:20:20.028 "state": "completed", 00:20:20.028 "digest": "sha256", 00:20:20.028 "dhgroup": "ffdhe2048" 00:20:20.028 } 00:20:20.028 } 00:20:20.028 ]' 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.028 10:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.028 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.289 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:20.289 10:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.232 10:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.232 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.514 00:20:21.514 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.514 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.514 10:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.775 { 00:20:21.775 "cntlid": 13, 00:20:21.775 "qid": 0, 00:20:21.775 "state": "enabled", 00:20:21.775 "thread": "nvmf_tgt_poll_group_000", 00:20:21.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:21.775 "listen_address": { 00:20:21.775 "trtype": "TCP", 00:20:21.775 "adrfam": "IPv4", 00:20:21.775 "traddr": "10.0.0.2", 00:20:21.775 "trsvcid": "4420" 00:20:21.775 }, 00:20:21.775 "peer_address": { 00:20:21.775 "trtype": "TCP", 00:20:21.775 "adrfam": "IPv4", 00:20:21.775 "traddr": "10.0.0.1", 00:20:21.775 "trsvcid": "43920" 00:20:21.775 }, 00:20:21.775 "auth": { 00:20:21.775 "state": "completed", 00:20:21.775 "digest": 
"sha256", 00:20:21.775 "dhgroup": "ffdhe2048" 00:20:21.775 } 00:20:21.775 } 00:20:21.775 ]' 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.775 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.036 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:22.036 10:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.977 10:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.977 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.237 00:20:23.237 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.237 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.237 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.237 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.497 { 00:20:23.497 "cntlid": 15, 00:20:23.497 "qid": 0, 00:20:23.497 "state": "enabled", 00:20:23.497 "thread": "nvmf_tgt_poll_group_000", 00:20:23.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:23.497 "listen_address": { 00:20:23.497 "trtype": "TCP", 00:20:23.497 "adrfam": "IPv4", 00:20:23.497 "traddr": "10.0.0.2", 00:20:23.497 "trsvcid": "4420" 00:20:23.497 }, 00:20:23.497 "peer_address": { 00:20:23.497 "trtype": "TCP", 00:20:23.497 "adrfam": "IPv4", 00:20:23.497 "traddr": "10.0.0.1", 00:20:23.497 
"trsvcid": "43944" 00:20:23.497 }, 00:20:23.497 "auth": { 00:20:23.497 "state": "completed", 00:20:23.497 "digest": "sha256", 00:20:23.497 "dhgroup": "ffdhe2048" 00:20:23.497 } 00:20:23.497 } 00:20:23.497 ]' 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.497 10:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.758 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:23.758 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:24.327 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.587 10:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:24.587 10:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.587 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.588 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.588 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.848 00:20:24.848 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.848 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.848 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.108 { 00:20:25.108 "cntlid": 17, 00:20:25.108 "qid": 0, 00:20:25.108 "state": "enabled", 00:20:25.108 "thread": "nvmf_tgt_poll_group_000", 00:20:25.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:25.108 "listen_address": { 00:20:25.108 "trtype": "TCP", 00:20:25.108 "adrfam": "IPv4", 
00:20:25.108 "traddr": "10.0.0.2", 00:20:25.108 "trsvcid": "4420" 00:20:25.108 }, 00:20:25.108 "peer_address": { 00:20:25.108 "trtype": "TCP", 00:20:25.108 "adrfam": "IPv4", 00:20:25.108 "traddr": "10.0.0.1", 00:20:25.108 "trsvcid": "43972" 00:20:25.108 }, 00:20:25.108 "auth": { 00:20:25.108 "state": "completed", 00:20:25.108 "digest": "sha256", 00:20:25.108 "dhgroup": "ffdhe3072" 00:20:25.108 } 00:20:25.108 } 00:20:25.108 ]' 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.108 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.369 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:25.369 10:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.308 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.568 00:20:26.568 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.568 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.568 10:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.828 { 
00:20:26.828 "cntlid": 19, 00:20:26.828 "qid": 0, 00:20:26.828 "state": "enabled", 00:20:26.828 "thread": "nvmf_tgt_poll_group_000", 00:20:26.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.828 "listen_address": { 00:20:26.828 "trtype": "TCP", 00:20:26.828 "adrfam": "IPv4", 00:20:26.828 "traddr": "10.0.0.2", 00:20:26.828 "trsvcid": "4420" 00:20:26.828 }, 00:20:26.828 "peer_address": { 00:20:26.828 "trtype": "TCP", 00:20:26.828 "adrfam": "IPv4", 00:20:26.828 "traddr": "10.0.0.1", 00:20:26.828 "trsvcid": "44000" 00:20:26.828 }, 00:20:26.828 "auth": { 00:20:26.828 "state": "completed", 00:20:26.828 "digest": "sha256", 00:20:26.828 "dhgroup": "ffdhe3072" 00:20:26.828 } 00:20:26.828 } 00:20:26.828 ]' 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.828 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.829 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.829 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.829 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.829 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.089 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:27.089 10:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.028 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.288 00:20:28.288 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.288 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.288 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.548 10:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.548 { 00:20:28.548 "cntlid": 21, 00:20:28.548 "qid": 0, 00:20:28.548 "state": "enabled", 00:20:28.548 "thread": "nvmf_tgt_poll_group_000", 00:20:28.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:28.548 "listen_address": { 00:20:28.548 "trtype": "TCP", 00:20:28.548 "adrfam": "IPv4", 00:20:28.548 "traddr": "10.0.0.2", 00:20:28.548 "trsvcid": "4420" 00:20:28.548 }, 00:20:28.548 "peer_address": { 00:20:28.548 "trtype": "TCP", 00:20:28.548 "adrfam": "IPv4", 00:20:28.548 "traddr": "10.0.0.1", 00:20:28.548 "trsvcid": "44022" 00:20:28.548 }, 00:20:28.548 "auth": { 00:20:28.548 "state": "completed", 00:20:28.548 "digest": "sha256", 00:20:28.548 "dhgroup": "ffdhe3072" 00:20:28.548 } 00:20:28.548 } 00:20:28.548 ]' 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.548 10:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.549 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.549 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.549 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.549 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.549 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.809 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:28.809 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.752 10:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.752 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.013 00:20:30.013 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.013 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.013 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.275 10:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.275 { 00:20:30.275 "cntlid": 23, 00:20:30.275 "qid": 0, 00:20:30.275 "state": "enabled", 00:20:30.275 "thread": "nvmf_tgt_poll_group_000", 00:20:30.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:30.275 "listen_address": { 00:20:30.275 "trtype": "TCP", 00:20:30.275 "adrfam": "IPv4", 00:20:30.275 "traddr": "10.0.0.2", 00:20:30.275 "trsvcid": "4420" 00:20:30.275 }, 00:20:30.275 "peer_address": { 00:20:30.275 "trtype": "TCP", 00:20:30.275 "adrfam": "IPv4", 00:20:30.275 "traddr": "10.0.0.1", 00:20:30.275 "trsvcid": "49450" 00:20:30.275 }, 00:20:30.275 "auth": { 00:20:30.275 "state": "completed", 00:20:30.275 "digest": "sha256", 00:20:30.275 "dhgroup": "ffdhe3072" 00:20:30.275 } 00:20:30.275 } 00:20:30.275 ]' 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.275 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.537 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.537 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.537 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.537 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:30.537 10:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.479 10:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.740 00:20:31.740 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.740 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.740 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.999 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.999 { 00:20:31.999 "cntlid": 25, 00:20:31.999 "qid": 0, 00:20:31.999 "state": "enabled", 00:20:31.999 "thread": "nvmf_tgt_poll_group_000", 00:20:31.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:31.999 "listen_address": { 00:20:31.999 "trtype": "TCP", 00:20:31.999 "adrfam": "IPv4", 00:20:31.999 "traddr": "10.0.0.2", 00:20:31.999 "trsvcid": "4420" 00:20:31.999 }, 00:20:31.999 "peer_address": { 00:20:31.999 "trtype": "TCP", 00:20:31.999 "adrfam": "IPv4", 00:20:31.999 "traddr": "10.0.0.1", 00:20:31.999 "trsvcid": "49464" 00:20:31.999 }, 00:20:32.000 "auth": { 00:20:32.000 "state": "completed", 00:20:32.000 "digest": "sha256", 00:20:32.000 "dhgroup": "ffdhe4096" 00:20:32.000 } 00:20:32.000 } 00:20:32.000 ]' 00:20:32.000 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.000 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.000 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.000 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.000 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.259 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.259 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.259 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.259 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:32.260 10:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.201 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.462 00:20:33.462 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.462 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.462 10:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.723 { 00:20:33.723 "cntlid": 27, 00:20:33.723 "qid": 0, 00:20:33.723 "state": "enabled", 00:20:33.723 "thread": "nvmf_tgt_poll_group_000", 00:20:33.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:33.723 "listen_address": { 00:20:33.723 "trtype": "TCP", 00:20:33.723 "adrfam": "IPv4", 00:20:33.723 "traddr": "10.0.0.2", 00:20:33.723 "trsvcid": "4420" 00:20:33.723 }, 00:20:33.723 "peer_address": { 00:20:33.723 "trtype": "TCP", 00:20:33.723 "adrfam": "IPv4", 00:20:33.723 "traddr": "10.0.0.1", 00:20:33.723 "trsvcid": "49480" 00:20:33.723 }, 00:20:33.723 "auth": { 00:20:33.723 "state": "completed", 00:20:33.723 "digest": "sha256", 00:20:33.723 "dhgroup": "ffdhe4096" 00:20:33.723 } 00:20:33.723 } 00:20:33.723 ]' 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.723 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.983 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.983 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.983 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.983 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:33.983 10:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:34.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.925 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.186 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.448 00:20:35.448 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
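The records above and below repeat the same DHCHAP authentication round once per key index and DH group. Condensed into a single illustrative shell sequence, with paths, NQNs, addresses and flags taken from this log; it is a sketch, not the literal target/auth.sh: the target is assumed to answer on its default RPC socket, key2/ckey2 are keyring entry names loaded earlier in the run (not literal secrets), and DHCHAP_KEY/DHCHAP_CTRL_KEY stand in for the DHHC-1:xx:... strings printed in the surrounding records.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # rpc.py path used throughout this log
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host-side bdev_nvme options: restrict negotiation to one digest / DH group per round
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side (default RPC socket assumed): allow the host and bind it to a key pair from the keyring
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller with the same keys, then check that nvme0 came up
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0

# Target side: confirm the qpair finished authentication with the expected digest/dhgroup/state
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .digest, .dhgroup, .state'

# Tear down, then repeat the check with the kernel initiator using the raw DHHC-1 secrets
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN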
00:20:35.448 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.448 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.448 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.710 { 00:20:35.710 "cntlid": 29, 00:20:35.710 "qid": 0, 00:20:35.710 "state": "enabled", 00:20:35.710 "thread": "nvmf_tgt_poll_group_000", 00:20:35.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:35.710 "listen_address": { 00:20:35.710 "trtype": "TCP", 00:20:35.710 "adrfam": "IPv4", 00:20:35.710 "traddr": "10.0.0.2", 00:20:35.710 "trsvcid": "4420" 00:20:35.710 }, 00:20:35.710 "peer_address": { 00:20:35.710 "trtype": "TCP", 00:20:35.710 "adrfam": "IPv4", 00:20:35.710 "traddr": "10.0.0.1", 00:20:35.710 "trsvcid": "49510" 00:20:35.710 }, 00:20:35.710 "auth": { 00:20:35.710 "state": "completed", 00:20:35.710 "digest": "sha256", 00:20:35.710 "dhgroup": "ffdhe4096" 00:20:35.710 } 00:20:35.710 } 00:20:35.710 ]' 00:20:35.710 10:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.710 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.970 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:35.970 10:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: 
--dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:36.540 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.801 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.062 00:20:37.062 10:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.062 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.062 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.323 { 00:20:37.323 "cntlid": 31, 00:20:37.323 "qid": 0, 00:20:37.323 "state": "enabled", 00:20:37.323 "thread": "nvmf_tgt_poll_group_000", 00:20:37.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:37.323 "listen_address": { 00:20:37.323 "trtype": "TCP", 00:20:37.323 "adrfam": "IPv4", 00:20:37.323 "traddr": "10.0.0.2", 00:20:37.323 "trsvcid": "4420" 00:20:37.323 }, 00:20:37.323 "peer_address": { 00:20:37.323 "trtype": "TCP", 00:20:37.323 "adrfam": "IPv4", 00:20:37.323 "traddr": "10.0.0.1", 00:20:37.323 "trsvcid": "49544" 00:20:37.323 }, 00:20:37.323 "auth": { 00:20:37.323 "state": "completed", 00:20:37.323 "digest": "sha256", 00:20:37.323 "dhgroup": "ffdhe4096" 00:20:37.323 } 00:20:37.323 } 00:20:37.323 ]' 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.323 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.584 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.584 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.584 10:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.584 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:37.584 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.527 10:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.527 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.527 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.527 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.527 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.097 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.097 { 00:20:39.097 "cntlid": 33, 00:20:39.097 "qid": 0, 00:20:39.097 "state": "enabled", 00:20:39.097 "thread": "nvmf_tgt_poll_group_000", 00:20:39.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:39.097 "listen_address": { 00:20:39.097 "trtype": "TCP", 00:20:39.097 "adrfam": "IPv4", 00:20:39.097 "traddr": "10.0.0.2", 00:20:39.097 "trsvcid": "4420" 00:20:39.097 }, 00:20:39.097 "peer_address": { 00:20:39.097 "trtype": "TCP", 00:20:39.097 "adrfam": "IPv4", 00:20:39.097 "traddr": "10.0.0.1", 00:20:39.097 "trsvcid": "43380" 00:20:39.097 }, 00:20:39.097 "auth": { 00:20:39.097 "state": "completed", 00:20:39.097 "digest": "sha256", 00:20:39.097 "dhgroup": "ffdhe6144" 00:20:39.097 } 00:20:39.097 } 00:20:39.097 ]' 00:20:39.097 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.358 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.619 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret 
DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:39.619 10:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.189 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.449 10:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.709 00:20:40.709 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.709 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.709 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.969 { 00:20:40.969 "cntlid": 35, 00:20:40.969 "qid": 0, 00:20:40.969 "state": "enabled", 00:20:40.969 "thread": "nvmf_tgt_poll_group_000", 00:20:40.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:40.969 "listen_address": { 00:20:40.969 "trtype": "TCP", 00:20:40.969 "adrfam": "IPv4", 00:20:40.969 "traddr": "10.0.0.2", 00:20:40.969 "trsvcid": "4420" 00:20:40.969 }, 00:20:40.969 "peer_address": { 00:20:40.969 "trtype": "TCP", 00:20:40.969 "adrfam": "IPv4", 00:20:40.969 "traddr": "10.0.0.1", 00:20:40.969 "trsvcid": "43398" 00:20:40.969 }, 00:20:40.969 "auth": { 00:20:40.969 "state": "completed", 00:20:40.969 "digest": "sha256", 00:20:40.969 "dhgroup": "ffdhe6144" 00:20:40.969 } 00:20:40.969 } 00:20:40.969 ]' 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.969 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.970 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.970 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.230 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.230 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.230 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.230 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:41.230 10:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.169 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.170 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.739 00:20:42.739 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.739 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.739 10:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.739 { 00:20:42.739 "cntlid": 37, 00:20:42.739 "qid": 0, 00:20:42.739 "state": "enabled", 00:20:42.739 "thread": "nvmf_tgt_poll_group_000", 00:20:42.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:42.739 "listen_address": { 00:20:42.739 "trtype": "TCP", 00:20:42.739 "adrfam": "IPv4", 00:20:42.739 "traddr": "10.0.0.2", 00:20:42.739 "trsvcid": "4420" 00:20:42.739 }, 00:20:42.739 "peer_address": { 00:20:42.739 "trtype": "TCP", 00:20:42.739 "adrfam": "IPv4", 00:20:42.739 "traddr": "10.0.0.1", 00:20:42.739 "trsvcid": "43414" 00:20:42.739 }, 00:20:42.739 "auth": { 00:20:42.739 "state": "completed", 00:20:42.739 "digest": "sha256", 00:20:42.739 "dhgroup": "ffdhe6144" 00:20:42.739 } 00:20:42.739 } 00:20:42.739 ]' 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.739 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:42.999 10:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.939 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.199 10:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.199 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.199 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.199 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.458 00:20:44.458 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.458 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.458 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.718 10:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.718 { 00:20:44.718 "cntlid": 39, 00:20:44.718 "qid": 0, 00:20:44.718 "state": "enabled", 00:20:44.718 "thread": "nvmf_tgt_poll_group_000", 00:20:44.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:44.718 "listen_address": { 00:20:44.718 "trtype": "TCP", 00:20:44.718 "adrfam": "IPv4", 00:20:44.718 "traddr": "10.0.0.2", 00:20:44.718 "trsvcid": "4420" 00:20:44.718 }, 00:20:44.718 "peer_address": { 00:20:44.718 "trtype": "TCP", 00:20:44.718 "adrfam": "IPv4", 00:20:44.718 "traddr": "10.0.0.1", 00:20:44.718 "trsvcid": "43448" 00:20:44.718 }, 00:20:44.718 "auth": { 00:20:44.718 "state": "completed", 00:20:44.718 "digest": "sha256", 00:20:44.718 "dhgroup": "ffdhe6144" 00:20:44.718 } 00:20:44.718 } 00:20:44.718 ]' 00:20:44.718 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.718 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.718 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.718 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.719 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.719 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:44.719 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.719 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.979 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:44.979 10:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
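In between the bdev-level attach/detach pairs, the rounds above also exercise the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets passed on the command line (rather than key names registered with the host application), then disconnects, and the host entry is removed from the subsystem before the next combination. A condensed sketch of that leg, with placeholder secrets standing in for the base64 DHHC-1 blobs seen in the trace, might look like this:

# Kernel-initiator leg of a round: connect with explicit DHHC-1 secrets, then clean up.
# Run as root with the nvme-tcp module loaded; the secrets below are placeholders.
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
subnqn=nqn.2024-03.io.spdk:cnode0

# Same transport flags as the trace (-i 1, -l 0), with the secrets given inline.
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -q $hostnqn --hostid $hostid \
    -i 1 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

# On success the disconnect reports "disconnected 1 controller(s)", as in the trace.
nvme disconnect -n $subnqn

# Target-side RPC (default socket), mirroring rpc_cmd in the trace: drop the host entry
# so the next digest/dhgroup/key combination starts from a clean subsystem.
scripts/rpc.py nvmf_subsystem_remove_host $subnqn $hostnqn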
00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.919 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.491 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.491 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.491 { 00:20:46.491 "cntlid": 41, 00:20:46.491 "qid": 0, 00:20:46.491 "state": "enabled", 00:20:46.491 "thread": "nvmf_tgt_poll_group_000", 00:20:46.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.491 "listen_address": { 00:20:46.491 "trtype": "TCP", 00:20:46.491 "adrfam": "IPv4", 00:20:46.491 "traddr": "10.0.0.2", 00:20:46.491 "trsvcid": "4420" 00:20:46.491 }, 00:20:46.491 "peer_address": { 00:20:46.491 "trtype": "TCP", 00:20:46.491 "adrfam": "IPv4", 00:20:46.491 "traddr": "10.0.0.1", 00:20:46.491 "trsvcid": "43472" 00:20:46.491 }, 00:20:46.491 "auth": { 00:20:46.491 "state": "completed", 00:20:46.491 "digest": "sha256", 00:20:46.491 "dhgroup": "ffdhe8192" 00:20:46.491 } 00:20:46.491 } 00:20:46.491 ]' 00:20:46.752 10:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.752 10:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.752 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.012 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:47.012 10:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.662 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.938 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.939 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.939 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.939 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.508 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.508 { 00:20:48.508 "cntlid": 43, 00:20:48.508 "qid": 0, 00:20:48.508 "state": "enabled", 00:20:48.508 "thread": "nvmf_tgt_poll_group_000", 00:20:48.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:48.508 "listen_address": { 00:20:48.508 "trtype": "TCP", 00:20:48.508 "adrfam": "IPv4", 00:20:48.508 "traddr": "10.0.0.2", 00:20:48.508 "trsvcid": "4420" 00:20:48.508 }, 00:20:48.508 "peer_address": { 00:20:48.508 "trtype": "TCP", 00:20:48.508 "adrfam": "IPv4", 00:20:48.508 "traddr": "10.0.0.1", 00:20:48.508 "trsvcid": "43494" 00:20:48.508 }, 00:20:48.508 "auth": { 00:20:48.508 "state": "completed", 00:20:48.508 "digest": "sha256", 00:20:48.508 "dhgroup": "ffdhe8192" 00:20:48.508 } 00:20:48.508 } 00:20:48.508 ]' 00:20:48.508 10:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.768 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.136 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:49.136 10:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.707 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.968 10:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.968 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.538 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.538 10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.538 { 00:20:50.538 "cntlid": 45, 00:20:50.538 "qid": 0, 00:20:50.538 "state": "enabled", 00:20:50.538 "thread": "nvmf_tgt_poll_group_000", 00:20:50.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:50.538 "listen_address": { 00:20:50.538 "trtype": "TCP", 00:20:50.538 "adrfam": "IPv4", 00:20:50.538 "traddr": "10.0.0.2", 00:20:50.538 "trsvcid": "4420" 00:20:50.538 }, 00:20:50.538 "peer_address": { 00:20:50.538 "trtype": "TCP", 00:20:50.538 "adrfam": "IPv4", 00:20:50.538 "traddr": "10.0.0.1", 00:20:50.538 "trsvcid": "58688" 00:20:50.538 }, 00:20:50.538 "auth": { 00:20:50.538 "state": "completed", 00:20:50.538 "digest": "sha256", 00:20:50.538 "dhgroup": "ffdhe8192" 00:20:50.538 } 00:20:50.538 } 00:20:50.538 ]' 00:20:50.538 
10:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.538 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.538 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:50.797 10:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:51.736 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.996 10:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.996 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.566 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.566 { 00:20:52.566 "cntlid": 47, 00:20:52.566 "qid": 0, 00:20:52.566 "state": "enabled", 00:20:52.566 "thread": "nvmf_tgt_poll_group_000", 00:20:52.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:52.566 "listen_address": { 00:20:52.566 "trtype": "TCP", 00:20:52.566 "adrfam": "IPv4", 00:20:52.566 "traddr": "10.0.0.2", 00:20:52.566 "trsvcid": "4420" 00:20:52.566 }, 00:20:52.566 "peer_address": { 00:20:52.566 "trtype": "TCP", 00:20:52.566 "adrfam": "IPv4", 00:20:52.566 "traddr": "10.0.0.1", 00:20:52.566 "trsvcid": "58714" 00:20:52.566 }, 00:20:52.566 "auth": { 00:20:52.566 "state": "completed", 00:20:52.566 
"digest": "sha256", 00:20:52.566 "dhgroup": "ffdhe8192" 00:20:52.566 } 00:20:52.566 } 00:20:52.566 ]' 00:20:52.566 10:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.566 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.566 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:52.826 10:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:53.766 10:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.766 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.027 00:20:54.027 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.027 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.027 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.287 { 00:20:54.287 "cntlid": 49, 00:20:54.287 "qid": 0, 00:20:54.287 "state": "enabled", 00:20:54.287 "thread": "nvmf_tgt_poll_group_000", 00:20:54.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:54.287 "listen_address": { 00:20:54.287 "trtype": "TCP", 00:20:54.287 "adrfam": "IPv4", 
00:20:54.287 "traddr": "10.0.0.2", 00:20:54.287 "trsvcid": "4420" 00:20:54.287 }, 00:20:54.287 "peer_address": { 00:20:54.287 "trtype": "TCP", 00:20:54.287 "adrfam": "IPv4", 00:20:54.287 "traddr": "10.0.0.1", 00:20:54.287 "trsvcid": "58724" 00:20:54.287 }, 00:20:54.287 "auth": { 00:20:54.287 "state": "completed", 00:20:54.287 "digest": "sha384", 00:20:54.287 "dhgroup": "null" 00:20:54.287 } 00:20:54.287 } 00:20:54.287 ]' 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.287 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.556 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.556 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.556 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.556 10:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.556 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:54.556 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.526 10:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.788 00:20:55.788 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.788 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.788 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.049 { 00:20:56.049 "cntlid": 51, 00:20:56.049 "qid": 0, 00:20:56.049 "state": "enabled", 
00:20:56.049 "thread": "nvmf_tgt_poll_group_000", 00:20:56.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:56.049 "listen_address": { 00:20:56.049 "trtype": "TCP", 00:20:56.049 "adrfam": "IPv4", 00:20:56.049 "traddr": "10.0.0.2", 00:20:56.049 "trsvcid": "4420" 00:20:56.049 }, 00:20:56.049 "peer_address": { 00:20:56.049 "trtype": "TCP", 00:20:56.049 "adrfam": "IPv4", 00:20:56.049 "traddr": "10.0.0.1", 00:20:56.049 "trsvcid": "58762" 00:20:56.049 }, 00:20:56.049 "auth": { 00:20:56.049 "state": "completed", 00:20:56.049 "digest": "sha384", 00:20:56.049 "dhgroup": "null" 00:20:56.049 } 00:20:56.049 } 00:20:56.049 ]' 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.049 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.311 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:56.311 10:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.258 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.518 00:20:57.518 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.518 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.518 10:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.779 10:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.779 { 00:20:57.779 "cntlid": 53, 00:20:57.779 "qid": 0, 00:20:57.779 "state": "enabled", 00:20:57.779 "thread": "nvmf_tgt_poll_group_000", 00:20:57.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:57.779 "listen_address": { 00:20:57.779 "trtype": "TCP", 00:20:57.779 "adrfam": "IPv4", 00:20:57.779 "traddr": "10.0.0.2", 00:20:57.779 "trsvcid": "4420" 00:20:57.779 }, 00:20:57.779 "peer_address": { 00:20:57.779 "trtype": "TCP", 00:20:57.779 "adrfam": "IPv4", 00:20:57.779 "traddr": "10.0.0.1", 00:20:57.779 "trsvcid": "58798" 00:20:57.779 }, 00:20:57.779 "auth": { 00:20:57.779 "state": "completed", 00:20:57.779 "digest": "sha384", 00:20:57.779 "dhgroup": "null" 00:20:57.779 } 00:20:57.779 } 00:20:57.779 ]' 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.779 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.040 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:58.040 10:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.981 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.242 00:20:59.242 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.242 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.242 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.502 { 00:20:59.502 "cntlid": 55, 00:20:59.502 "qid": 0, 00:20:59.502 "state": "enabled", 00:20:59.502 "thread": "nvmf_tgt_poll_group_000", 00:20:59.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:59.502 "listen_address": { 00:20:59.502 "trtype": "TCP", 00:20:59.502 "adrfam": "IPv4", 00:20:59.502 "traddr": "10.0.0.2", 00:20:59.502 "trsvcid": "4420" 00:20:59.502 }, 00:20:59.502 "peer_address": { 00:20:59.502 "trtype": "TCP", 00:20:59.502 "adrfam": "IPv4", 00:20:59.502 "traddr": "10.0.0.1", 00:20:59.502 "trsvcid": "35246" 00:20:59.502 }, 00:20:59.502 "auth": { 00:20:59.502 "state": "completed", 00:20:59.502 "digest": "sha384", 00:20:59.502 "dhgroup": "null" 00:20:59.502 } 00:20:59.502 } 00:20:59.502 ]' 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.502 10:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.763 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:20:59.763 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:00.702 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.703 10:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.703 10:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.703 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.963 00:21:00.963 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.963 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.963 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.224 { 00:21:01.224 "cntlid": 57, 00:21:01.224 "qid": 0, 00:21:01.224 "state": "enabled", 00:21:01.224 "thread": "nvmf_tgt_poll_group_000", 00:21:01.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:01.224 "listen_address": { 00:21:01.224 "trtype": "TCP", 00:21:01.224 "adrfam": "IPv4", 00:21:01.224 "traddr": "10.0.0.2", 00:21:01.224 "trsvcid": "4420" 00:21:01.224 }, 00:21:01.224 "peer_address": { 00:21:01.224 "trtype": "TCP", 00:21:01.224 "adrfam": "IPv4", 00:21:01.224 "traddr": "10.0.0.1", 00:21:01.224 "trsvcid": "35274" 00:21:01.224 }, 00:21:01.224 "auth": { 00:21:01.224 "state": "completed", 00:21:01.224 "digest": "sha384", 00:21:01.224 "dhgroup": "ffdhe2048" 00:21:01.224 } 00:21:01.224 } 00:21:01.224 ]' 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.224 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.485 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:01.485 10:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.428 10:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.689 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.689 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.950 { 00:21:02.950 "cntlid": 59, 00:21:02.950 "qid": 0, 00:21:02.950 "state": "enabled", 00:21:02.950 "thread": "nvmf_tgt_poll_group_000", 00:21:02.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:02.950 "listen_address": { 00:21:02.950 "trtype": "TCP", 00:21:02.950 "adrfam": "IPv4", 00:21:02.950 "traddr": "10.0.0.2", 00:21:02.950 "trsvcid": "4420" 00:21:02.950 }, 00:21:02.950 "peer_address": { 00:21:02.950 "trtype": "TCP", 00:21:02.950 "adrfam": "IPv4", 00:21:02.950 "traddr": "10.0.0.1", 00:21:02.950 "trsvcid": "35286" 00:21:02.950 }, 00:21:02.950 "auth": { 00:21:02.950 "state": "completed", 00:21:02.950 "digest": "sha384", 00:21:02.950 "dhgroup": "ffdhe2048" 00:21:02.950 } 00:21:02.950 } 00:21:02.950 ]' 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.950 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.211 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:03.211 10:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.781 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.042 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.303 00:21:04.303 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.303 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.303 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.563 { 00:21:04.563 "cntlid": 61, 00:21:04.563 "qid": 0, 00:21:04.563 "state": "enabled", 00:21:04.563 "thread": "nvmf_tgt_poll_group_000", 00:21:04.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.563 "listen_address": { 00:21:04.563 "trtype": "TCP", 00:21:04.563 "adrfam": "IPv4", 00:21:04.563 "traddr": "10.0.0.2", 00:21:04.563 "trsvcid": "4420" 00:21:04.563 }, 00:21:04.563 "peer_address": { 00:21:04.563 "trtype": "TCP", 00:21:04.563 "adrfam": "IPv4", 00:21:04.563 "traddr": "10.0.0.1", 00:21:04.563 "trsvcid": "35322" 00:21:04.563 }, 00:21:04.563 "auth": { 00:21:04.563 "state": "completed", 00:21:04.563 "digest": "sha384", 00:21:04.563 "dhgroup": "ffdhe2048" 00:21:04.563 } 00:21:04.563 } 00:21:04.563 ]' 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.563 10:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.563 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.563 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.563 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.822 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:04.822 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.763 10:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.763 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.023 00:21:06.023 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.023 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.023 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.283 { 00:21:06.283 "cntlid": 63, 00:21:06.283 "qid": 0, 00:21:06.283 "state": "enabled", 00:21:06.283 "thread": "nvmf_tgt_poll_group_000", 00:21:06.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:06.283 "listen_address": { 00:21:06.283 "trtype": "TCP", 00:21:06.283 "adrfam": "IPv4", 00:21:06.283 "traddr": "10.0.0.2", 00:21:06.283 "trsvcid": "4420" 00:21:06.283 }, 00:21:06.283 "peer_address": { 00:21:06.283 "trtype": "TCP", 00:21:06.283 "adrfam": "IPv4", 00:21:06.283 "traddr": "10.0.0.1", 00:21:06.283 "trsvcid": "35344" 00:21:06.283 }, 00:21:06.283 "auth": { 00:21:06.283 "state": "completed", 00:21:06.283 "digest": "sha384", 00:21:06.283 "dhgroup": "ffdhe2048" 00:21:06.283 } 00:21:06.283 } 00:21:06.283 ]' 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.283 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.542 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:06.542 10:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:07.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.519 10:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.779 
00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.779 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.038 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.038 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.038 { 00:21:08.038 "cntlid": 65, 00:21:08.038 "qid": 0, 00:21:08.038 "state": "enabled", 00:21:08.038 "thread": "nvmf_tgt_poll_group_000", 00:21:08.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:08.038 "listen_address": { 00:21:08.038 "trtype": "TCP", 00:21:08.038 "adrfam": "IPv4", 00:21:08.038 "traddr": "10.0.0.2", 00:21:08.038 "trsvcid": "4420" 00:21:08.038 }, 00:21:08.038 "peer_address": { 00:21:08.038 "trtype": "TCP", 00:21:08.038 "adrfam": "IPv4", 00:21:08.038 "traddr": "10.0.0.1", 00:21:08.038 "trsvcid": "35360" 00:21:08.038 }, 00:21:08.038 "auth": { 00:21:08.038 "state": "completed", 00:21:08.039 "digest": "sha384", 00:21:08.039 "dhgroup": "ffdhe3072" 00:21:08.039 } 00:21:08.039 } 00:21:08.039 ]' 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.039 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.298 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:08.298 10:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.867 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.126 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.386 00:21:09.386 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.386 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.386 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.646 { 00:21:09.646 "cntlid": 67, 00:21:09.646 "qid": 0, 00:21:09.646 "state": "enabled", 00:21:09.646 "thread": "nvmf_tgt_poll_group_000", 00:21:09.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.646 "listen_address": { 00:21:09.646 "trtype": "TCP", 00:21:09.646 "adrfam": "IPv4", 00:21:09.646 "traddr": "10.0.0.2", 00:21:09.646 "trsvcid": "4420" 00:21:09.646 }, 00:21:09.646 "peer_address": { 00:21:09.646 "trtype": "TCP", 00:21:09.646 "adrfam": "IPv4", 00:21:09.646 "traddr": "10.0.0.1", 00:21:09.646 "trsvcid": "56140" 00:21:09.646 }, 00:21:09.646 "auth": { 00:21:09.646 "state": "completed", 00:21:09.646 "digest": "sha384", 00:21:09.646 "dhgroup": "ffdhe3072" 00:21:09.646 } 00:21:09.646 } 00:21:09.646 ]' 00:21:09.646 10:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.646 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.906 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret 
DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:09.906 10:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.846 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.106 00:21:11.106 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.106 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.106 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.368 { 00:21:11.368 "cntlid": 69, 00:21:11.368 "qid": 0, 00:21:11.368 "state": "enabled", 00:21:11.368 "thread": "nvmf_tgt_poll_group_000", 00:21:11.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.368 "listen_address": { 00:21:11.368 "trtype": "TCP", 00:21:11.368 "adrfam": "IPv4", 00:21:11.368 "traddr": "10.0.0.2", 00:21:11.368 "trsvcid": "4420" 00:21:11.368 }, 00:21:11.368 "peer_address": { 00:21:11.368 "trtype": "TCP", 00:21:11.368 "adrfam": "IPv4", 00:21:11.368 "traddr": "10.0.0.1", 00:21:11.368 "trsvcid": "56164" 00:21:11.368 }, 00:21:11.368 "auth": { 00:21:11.368 "state": "completed", 00:21:11.368 "digest": "sha384", 00:21:11.368 "dhgroup": "ffdhe3072" 00:21:11.368 } 00:21:11.368 } 00:21:11.368 ]' 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.368 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:11.629 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:11.629 10:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:12.569 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.570 10:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.830 00:21:12.830 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.830 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.830 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.091 { 00:21:13.091 "cntlid": 71, 00:21:13.091 "qid": 0, 00:21:13.091 "state": "enabled", 00:21:13.091 "thread": "nvmf_tgt_poll_group_000", 00:21:13.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.091 "listen_address": { 00:21:13.091 "trtype": "TCP", 00:21:13.091 "adrfam": "IPv4", 00:21:13.091 "traddr": "10.0.0.2", 00:21:13.091 "trsvcid": "4420" 00:21:13.091 }, 00:21:13.091 "peer_address": { 00:21:13.091 "trtype": "TCP", 00:21:13.091 "adrfam": "IPv4", 00:21:13.091 "traddr": "10.0.0.1", 00:21:13.091 "trsvcid": "56198" 00:21:13.091 }, 00:21:13.091 "auth": { 00:21:13.091 "state": "completed", 00:21:13.091 "digest": "sha384", 00:21:13.091 "dhgroup": "ffdhe3072" 00:21:13.091 } 00:21:13.091 } 00:21:13.091 ]' 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.091 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.092 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.092 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.092 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.092 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.092 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.352 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:13.352 10:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:13.925 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.187 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.448 00:21:14.448 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.448 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.448 10:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.708 { 00:21:14.708 "cntlid": 73, 00:21:14.708 "qid": 0, 00:21:14.708 "state": "enabled", 00:21:14.708 "thread": "nvmf_tgt_poll_group_000", 00:21:14.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.708 "listen_address": { 00:21:14.708 "trtype": "TCP", 00:21:14.708 "adrfam": "IPv4", 00:21:14.708 "traddr": "10.0.0.2", 00:21:14.708 "trsvcid": "4420" 00:21:14.708 }, 00:21:14.708 "peer_address": { 00:21:14.708 "trtype": "TCP", 00:21:14.708 "adrfam": "IPv4", 00:21:14.708 "traddr": "10.0.0.1", 00:21:14.708 "trsvcid": "56220" 00:21:14.708 }, 00:21:14.708 "auth": { 00:21:14.708 "state": "completed", 00:21:14.708 "digest": "sha384", 00:21:14.708 "dhgroup": "ffdhe4096" 00:21:14.708 } 00:21:14.708 } 00:21:14.708 ]' 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.708 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.968 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.968 
10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.968 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.968 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:14.968 10:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.910 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.171 00:21:16.171 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.171 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.171 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.430 { 00:21:16.430 "cntlid": 75, 00:21:16.430 "qid": 0, 00:21:16.430 "state": "enabled", 00:21:16.430 "thread": "nvmf_tgt_poll_group_000", 00:21:16.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.430 "listen_address": { 00:21:16.430 "trtype": "TCP", 00:21:16.430 "adrfam": "IPv4", 00:21:16.430 "traddr": "10.0.0.2", 00:21:16.430 "trsvcid": "4420" 00:21:16.430 }, 00:21:16.430 "peer_address": { 00:21:16.430 "trtype": "TCP", 00:21:16.430 "adrfam": "IPv4", 00:21:16.430 "traddr": "10.0.0.1", 00:21:16.430 "trsvcid": "56260" 00:21:16.430 }, 00:21:16.430 "auth": { 00:21:16.430 "state": "completed", 00:21:16.430 "digest": "sha384", 00:21:16.430 "dhgroup": "ffdhe4096" 00:21:16.430 } 00:21:16.430 } 00:21:16.430 ]' 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.430 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.431 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:16.431 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.691 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.691 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.691 10:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.691 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:16.691 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.630 10:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.630 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.891 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.151 { 00:21:18.151 "cntlid": 77, 00:21:18.151 "qid": 0, 00:21:18.151 "state": "enabled", 00:21:18.151 "thread": "nvmf_tgt_poll_group_000", 00:21:18.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.151 "listen_address": { 00:21:18.151 "trtype": "TCP", 00:21:18.151 "adrfam": "IPv4", 00:21:18.151 "traddr": "10.0.0.2", 00:21:18.151 "trsvcid": "4420" 00:21:18.151 }, 00:21:18.151 "peer_address": { 00:21:18.151 "trtype": "TCP", 00:21:18.151 "adrfam": "IPv4", 00:21:18.151 "traddr": "10.0.0.1", 00:21:18.151 "trsvcid": "56284" 00:21:18.151 }, 00:21:18.151 "auth": { 00:21:18.151 "state": "completed", 00:21:18.151 "digest": "sha384", 00:21:18.151 "dhgroup": "ffdhe4096" 00:21:18.151 } 00:21:18.151 } 00:21:18.151 ]' 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.151 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.151 10:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:18.411 10:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.349 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.608 10:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.867 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.867 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.867 { 00:21:19.867 "cntlid": 79, 00:21:19.867 "qid": 0, 00:21:19.867 "state": "enabled", 00:21:19.867 "thread": "nvmf_tgt_poll_group_000", 00:21:19.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:19.867 "listen_address": { 00:21:19.867 "trtype": "TCP", 00:21:19.867 "adrfam": "IPv4", 00:21:19.867 "traddr": "10.0.0.2", 00:21:19.867 "trsvcid": "4420" 00:21:19.867 }, 00:21:19.867 "peer_address": { 00:21:19.867 "trtype": "TCP", 00:21:19.867 "adrfam": "IPv4", 00:21:19.867 "traddr": "10.0.0.1", 00:21:19.867 "trsvcid": "48440" 00:21:19.867 }, 00:21:19.867 "auth": { 00:21:19.867 "state": "completed", 00:21:19.867 "digest": "sha384", 00:21:19.867 "dhgroup": "ffdhe4096" 00:21:19.867 } 00:21:19.867 } 00:21:19.867 ]' 00:21:19.868 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.127 10:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.127 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.387 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:20.387 10:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.958 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.218 10:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.218 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.478 00:21:21.479 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.479 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.479 10:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.738 { 00:21:21.738 "cntlid": 81, 00:21:21.738 "qid": 0, 00:21:21.738 "state": "enabled", 00:21:21.738 "thread": "nvmf_tgt_poll_group_000", 00:21:21.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.738 "listen_address": { 00:21:21.738 "trtype": "TCP", 00:21:21.738 "adrfam": "IPv4", 00:21:21.738 "traddr": "10.0.0.2", 00:21:21.738 "trsvcid": "4420" 00:21:21.738 }, 00:21:21.738 "peer_address": { 00:21:21.738 "trtype": "TCP", 00:21:21.738 "adrfam": "IPv4", 00:21:21.738 "traddr": "10.0.0.1", 00:21:21.738 "trsvcid": "48466" 00:21:21.738 }, 00:21:21.738 "auth": { 00:21:21.738 "state": "completed", 00:21:21.738 "digest": 
"sha384", 00:21:21.738 "dhgroup": "ffdhe6144" 00:21:21.738 } 00:21:21.738 } 00:21:21.738 ]' 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.738 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.999 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:21.999 10:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.939 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.199 00:21:23.199 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.199 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.199 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.459 { 00:21:23.459 "cntlid": 83, 00:21:23.459 "qid": 0, 00:21:23.459 "state": "enabled", 00:21:23.459 "thread": "nvmf_tgt_poll_group_000", 00:21:23.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:23.459 "listen_address": { 00:21:23.459 "trtype": "TCP", 00:21:23.459 "adrfam": "IPv4", 00:21:23.459 "traddr": "10.0.0.2", 00:21:23.459 
"trsvcid": "4420" 00:21:23.459 }, 00:21:23.459 "peer_address": { 00:21:23.459 "trtype": "TCP", 00:21:23.459 "adrfam": "IPv4", 00:21:23.459 "traddr": "10.0.0.1", 00:21:23.459 "trsvcid": "48484" 00:21:23.459 }, 00:21:23.459 "auth": { 00:21:23.459 "state": "completed", 00:21:23.459 "digest": "sha384", 00:21:23.459 "dhgroup": "ffdhe6144" 00:21:23.459 } 00:21:23.459 } 00:21:23.459 ]' 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.459 10:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.719 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:23.719 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:24.658 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:24.659 10:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:24.659 
10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.659 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.228 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.228 { 00:21:25.228 "cntlid": 85, 00:21:25.228 "qid": 0, 00:21:25.228 "state": "enabled", 00:21:25.228 "thread": "nvmf_tgt_poll_group_000", 00:21:25.228 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:25.228 "listen_address": { 00:21:25.228 "trtype": "TCP", 00:21:25.228 "adrfam": "IPv4", 00:21:25.228 "traddr": "10.0.0.2", 00:21:25.228 "trsvcid": "4420" 00:21:25.228 }, 00:21:25.228 "peer_address": { 00:21:25.228 "trtype": "TCP", 00:21:25.228 "adrfam": "IPv4", 00:21:25.228 "traddr": "10.0.0.1", 00:21:25.228 "trsvcid": "48510" 00:21:25.228 }, 00:21:25.228 "auth": { 00:21:25.228 "state": "completed", 00:21:25.228 "digest": "sha384", 00:21:25.228 "dhgroup": "ffdhe6144" 00:21:25.228 } 00:21:25.228 } 00:21:25.228 ]' 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.228 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:25.488 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.425 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.425 10:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.685 10:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.945 00:21:26.945 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.945 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.945 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.204 { 00:21:27.204 "cntlid": 87, 
00:21:27.204 "qid": 0, 00:21:27.204 "state": "enabled", 00:21:27.204 "thread": "nvmf_tgt_poll_group_000", 00:21:27.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:27.204 "listen_address": { 00:21:27.204 "trtype": "TCP", 00:21:27.204 "adrfam": "IPv4", 00:21:27.204 "traddr": "10.0.0.2", 00:21:27.204 "trsvcid": "4420" 00:21:27.204 }, 00:21:27.204 "peer_address": { 00:21:27.204 "trtype": "TCP", 00:21:27.204 "adrfam": "IPv4", 00:21:27.204 "traddr": "10.0.0.1", 00:21:27.204 "trsvcid": "48550" 00:21:27.204 }, 00:21:27.204 "auth": { 00:21:27.204 "state": "completed", 00:21:27.204 "digest": "sha384", 00:21:27.204 "dhgroup": "ffdhe6144" 00:21:27.204 } 00:21:27.204 } 00:21:27.204 ]' 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.204 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.205 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.205 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.205 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.205 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.205 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.465 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:27.465 10:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.404 10:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.974 00:21:28.974 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.974 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.974 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.234 { 00:21:29.234 "cntlid": 89, 00:21:29.234 "qid": 0, 00:21:29.234 "state": "enabled", 00:21:29.234 "thread": "nvmf_tgt_poll_group_000", 00:21:29.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:29.234 "listen_address": { 00:21:29.234 "trtype": "TCP", 00:21:29.234 "adrfam": "IPv4", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "trsvcid": "4420" 00:21:29.234 }, 00:21:29.234 "peer_address": { 00:21:29.234 "trtype": "TCP", 00:21:29.234 "adrfam": "IPv4", 00:21:29.234 "traddr": "10.0.0.1", 00:21:29.234 "trsvcid": "37692" 00:21:29.234 }, 00:21:29.234 "auth": { 00:21:29.234 "state": "completed", 00:21:29.234 "digest": "sha384", 00:21:29.234 "dhgroup": "ffdhe8192" 00:21:29.234 } 00:21:29.234 } 00:21:29.234 ]' 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.234 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.493 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:29.493 10:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.434 10:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.434 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.003 00:21:31.003 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.003 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.004 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.263 { 00:21:31.263 "cntlid": 91, 00:21:31.263 "qid": 0, 00:21:31.263 "state": "enabled", 00:21:31.263 "thread": "nvmf_tgt_poll_group_000", 00:21:31.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.263 "listen_address": { 00:21:31.263 "trtype": "TCP", 00:21:31.263 "adrfam": "IPv4", 00:21:31.263 "traddr": "10.0.0.2", 00:21:31.263 "trsvcid": "4420" 00:21:31.263 }, 00:21:31.263 "peer_address": { 00:21:31.263 "trtype": "TCP", 00:21:31.263 "adrfam": "IPv4", 00:21:31.263 "traddr": "10.0.0.1", 00:21:31.263 "trsvcid": "37712" 00:21:31.263 }, 00:21:31.263 "auth": { 00:21:31.263 "state": "completed", 00:21:31.263 "digest": "sha384", 00:21:31.263 "dhgroup": "ffdhe8192" 00:21:31.263 } 00:21:31.263 } 00:21:31.263 ]' 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.263 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.264 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.264 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.523 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:31.523 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.464 10:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.464 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.034 00:21:33.034 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.034 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.034 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.034 10:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.294 { 00:21:33.294 "cntlid": 93, 00:21:33.294 "qid": 0, 00:21:33.294 "state": "enabled", 00:21:33.294 "thread": "nvmf_tgt_poll_group_000", 00:21:33.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.294 "listen_address": { 00:21:33.294 "trtype": "TCP", 00:21:33.294 "adrfam": "IPv4", 00:21:33.294 "traddr": "10.0.0.2", 00:21:33.294 "trsvcid": "4420" 00:21:33.294 }, 00:21:33.294 "peer_address": { 00:21:33.294 "trtype": "TCP", 00:21:33.294 "adrfam": "IPv4", 00:21:33.294 "traddr": "10.0.0.1", 00:21:33.294 "trsvcid": "37746" 00:21:33.294 }, 00:21:33.294 "auth": { 00:21:33.294 "state": "completed", 00:21:33.294 "digest": "sha384", 00:21:33.294 "dhgroup": "ffdhe8192" 00:21:33.294 } 00:21:33.294 } 00:21:33.294 ]' 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.294 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.555 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:33.555 10:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:34.126 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.386 10:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.386 10:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.956 00:21:34.956 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.956 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.956 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.217 { 00:21:35.217 "cntlid": 95, 00:21:35.217 "qid": 0, 00:21:35.217 "state": "enabled", 00:21:35.217 "thread": "nvmf_tgt_poll_group_000", 00:21:35.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.217 "listen_address": { 00:21:35.217 "trtype": "TCP", 00:21:35.217 "adrfam": "IPv4", 00:21:35.217 "traddr": "10.0.0.2", 00:21:35.217 "trsvcid": "4420" 00:21:35.217 }, 00:21:35.217 "peer_address": { 00:21:35.217 "trtype": "TCP", 00:21:35.217 "adrfam": "IPv4", 00:21:35.217 "traddr": "10.0.0.1", 00:21:35.217 "trsvcid": "37782" 00:21:35.217 }, 00:21:35.217 "auth": { 00:21:35.217 "state": "completed", 00:21:35.217 "digest": "sha384", 00:21:35.217 "dhgroup": "ffdhe8192" 00:21:35.217 } 00:21:35.217 } 00:21:35.217 ]' 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.217 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.478 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:35.478 10:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.417 10:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.417 10:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.678 00:21:36.678 
10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.678 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.679 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.940 { 00:21:36.940 "cntlid": 97, 00:21:36.940 "qid": 0, 00:21:36.940 "state": "enabled", 00:21:36.940 "thread": "nvmf_tgt_poll_group_000", 00:21:36.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:36.940 "listen_address": { 00:21:36.940 "trtype": "TCP", 00:21:36.940 "adrfam": "IPv4", 00:21:36.940 "traddr": "10.0.0.2", 00:21:36.940 "trsvcid": "4420" 00:21:36.940 }, 00:21:36.940 "peer_address": { 00:21:36.940 "trtype": "TCP", 00:21:36.940 "adrfam": "IPv4", 00:21:36.940 "traddr": "10.0.0.1", 00:21:36.940 "trsvcid": "37798" 00:21:36.940 }, 00:21:36.940 "auth": { 00:21:36.940 "state": "completed", 00:21:36.940 "digest": "sha512", 00:21:36.940 "dhgroup": "null" 00:21:36.940 } 00:21:36.940 } 00:21:36.940 ]' 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.940 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.201 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:37.201 10:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.143 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.404 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.404 { 00:21:38.404 "cntlid": 99, 00:21:38.404 "qid": 0, 00:21:38.404 "state": "enabled", 00:21:38.404 "thread": "nvmf_tgt_poll_group_000", 00:21:38.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.404 "listen_address": { 00:21:38.404 "trtype": "TCP", 00:21:38.404 "adrfam": "IPv4", 00:21:38.404 "traddr": "10.0.0.2", 00:21:38.404 "trsvcid": "4420" 00:21:38.404 }, 00:21:38.404 "peer_address": { 00:21:38.404 "trtype": "TCP", 00:21:38.404 "adrfam": "IPv4", 00:21:38.404 "traddr": "10.0.0.1", 00:21:38.404 "trsvcid": "37816" 00:21:38.404 }, 00:21:38.404 "auth": { 00:21:38.404 "state": "completed", 00:21:38.404 "digest": "sha512", 00:21:38.404 "dhgroup": "null" 00:21:38.404 } 00:21:38.404 } 00:21:38.404 ]' 00:21:38.404 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.665 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.665 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.665 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.665 10:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.665 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.665 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.665 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.925 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:38.925 10:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.497 10:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
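Each iteration of the loops traced above (target/auth.sh walking the digests, DH groups and key ids) provisions authentication the same way before the qpair checks that follow. A condensed sketch of those commands, with the loop variables left symbolic and rpc.py standing in for the full scripts/rpc.py path used in this log; the host NQN is shortened to $hostnqn, and it is an assumption that the nvmf_* calls go to the default target RPC socket, since rpc_cmd's expansion is not shown in this section:

# Limit the host-side bdev_nvme module to the digest/DH group pair under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Register the host NQN on the subsystem with the DH-HMAC-CHAP key pair
# (key names such as key2/ckey2 were loaded earlier in the run; the key3
# iterations pass --dhchap-key only, with no controller key).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Attach a controller through the host RPC socket with the matching keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

Supplying --dhchap-ctrlr-key in addition to --dhchap-key requests bidirectional authentication, so the host also verifies the controller rather than only proving itself.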
00:21:39.757 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.017 00:21:40.017 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.017 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.017 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.278 { 00:21:40.278 "cntlid": 101, 00:21:40.278 "qid": 0, 00:21:40.278 "state": "enabled", 00:21:40.278 "thread": "nvmf_tgt_poll_group_000", 00:21:40.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.278 "listen_address": { 00:21:40.278 "trtype": "TCP", 00:21:40.278 "adrfam": "IPv4", 00:21:40.278 "traddr": "10.0.0.2", 00:21:40.278 "trsvcid": "4420" 00:21:40.278 }, 00:21:40.278 "peer_address": { 00:21:40.278 "trtype": "TCP", 00:21:40.278 "adrfam": "IPv4", 00:21:40.278 "traddr": "10.0.0.1", 00:21:40.278 "trsvcid": "36002" 00:21:40.278 }, 00:21:40.278 "auth": { 00:21:40.278 "state": "completed", 00:21:40.278 "digest": "sha512", 00:21:40.278 "dhgroup": "null" 00:21:40.278 } 00:21:40.278 } 00:21:40.278 ]' 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.278 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.539 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:40.539 10:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.481 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.482 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.482 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.482 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.482 10:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.742 00:21:41.742 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.742 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.742 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.002 { 00:21:42.002 "cntlid": 103, 00:21:42.002 "qid": 0, 00:21:42.002 "state": "enabled", 00:21:42.002 "thread": "nvmf_tgt_poll_group_000", 00:21:42.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.002 "listen_address": { 00:21:42.002 "trtype": "TCP", 00:21:42.002 "adrfam": "IPv4", 00:21:42.002 "traddr": "10.0.0.2", 00:21:42.002 "trsvcid": "4420" 00:21:42.002 }, 00:21:42.002 "peer_address": { 00:21:42.002 "trtype": "TCP", 00:21:42.002 "adrfam": "IPv4", 00:21:42.002 "traddr": "10.0.0.1", 00:21:42.002 "trsvcid": "36032" 00:21:42.002 }, 00:21:42.002 "auth": { 00:21:42.002 "state": "completed", 00:21:42.002 "digest": "sha512", 00:21:42.002 "dhgroup": "null" 00:21:42.002 } 00:21:42.002 } 00:21:42.002 ]' 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.002 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.003 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.003 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.263 10:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:42.263 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
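The host-facing half of each pass, seen above as nvme_connect and nvme disconnect, amounts to handing nvme-cli the raw DHHC-1 secrets and then deregistering the host again. Roughly, with the secrets elided and the host NQN/host ID shortened to $hostnqn/$hostid (the address, port and subsystem NQN are the ones used throughout this log):

# Connect through the kernel initiator, supplying the host and controller
# secrets in their DHHC-1 wire format.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
# Tear the path down and drop the host entry so the next iteration starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The key3 iterations pass only --dhchap-secret, matching the target side where no controller key is registered for that slot.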
00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.204 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.464 00:21:43.464 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.464 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.464 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.724 { 00:21:43.724 "cntlid": 105, 00:21:43.724 "qid": 0, 00:21:43.724 "state": "enabled", 00:21:43.724 "thread": "nvmf_tgt_poll_group_000", 00:21:43.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:43.724 "listen_address": { 00:21:43.724 "trtype": "TCP", 00:21:43.724 "adrfam": "IPv4", 00:21:43.724 "traddr": "10.0.0.2", 00:21:43.724 "trsvcid": "4420" 00:21:43.724 }, 00:21:43.724 "peer_address": { 00:21:43.724 "trtype": "TCP", 00:21:43.724 "adrfam": "IPv4", 00:21:43.724 "traddr": "10.0.0.1", 00:21:43.724 "trsvcid": "36042" 00:21:43.724 }, 00:21:43.724 "auth": { 00:21:43.724 "state": "completed", 00:21:43.724 "digest": "sha512", 00:21:43.724 "dhgroup": "ffdhe2048" 00:21:43.724 } 00:21:43.724 } 00:21:43.724 ]' 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.724 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.724 10:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.985 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:43.985 10:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.927 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.188 00:21:45.188 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.188 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.188 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.448 { 00:21:45.448 "cntlid": 107, 00:21:45.448 "qid": 0, 00:21:45.448 "state": "enabled", 00:21:45.448 "thread": "nvmf_tgt_poll_group_000", 00:21:45.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:45.448 "listen_address": { 00:21:45.448 "trtype": "TCP", 00:21:45.448 "adrfam": "IPv4", 00:21:45.448 "traddr": "10.0.0.2", 00:21:45.448 "trsvcid": "4420" 00:21:45.448 }, 00:21:45.448 "peer_address": { 00:21:45.448 "trtype": "TCP", 00:21:45.448 "adrfam": "IPv4", 00:21:45.448 "traddr": "10.0.0.1", 00:21:45.448 "trsvcid": "36074" 00:21:45.448 }, 00:21:45.448 "auth": { 00:21:45.448 "state": "completed", 00:21:45.448 "digest": "sha512", 00:21:45.448 "dhgroup": "ffdhe2048" 00:21:45.448 } 00:21:45.448 } 00:21:45.448 ]' 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.448 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.709 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:45.709 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.657 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
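The assertions that follow each attach are built from the same jq probes shown above: fetch the subsystem's qpairs from the target and check the negotiated auth parameters. A minimal sketch, assuming the same subsystem NQN and that rpc.py again stands in for the full scripts/rpc.py path:

# Ask the target which qpairs the subsystem currently has, then verify that
# authentication completed with the digest and DH group selected for this pass.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]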
00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.657 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.918 00:21:46.918 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.918 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.918 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.178 { 00:21:47.178 "cntlid": 109, 00:21:47.178 "qid": 0, 00:21:47.178 "state": "enabled", 00:21:47.178 "thread": "nvmf_tgt_poll_group_000", 00:21:47.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:47.178 "listen_address": { 00:21:47.178 "trtype": "TCP", 00:21:47.178 "adrfam": "IPv4", 00:21:47.178 "traddr": "10.0.0.2", 00:21:47.178 "trsvcid": "4420" 00:21:47.178 }, 00:21:47.178 "peer_address": { 00:21:47.178 "trtype": "TCP", 00:21:47.178 "adrfam": "IPv4", 00:21:47.178 "traddr": "10.0.0.1", 00:21:47.178 "trsvcid": "36092" 00:21:47.178 }, 00:21:47.178 "auth": { 00:21:47.178 "state": "completed", 00:21:47.178 "digest": "sha512", 00:21:47.178 "dhgroup": "ffdhe2048" 00:21:47.178 } 00:21:47.178 } 00:21:47.178 ]' 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.178 10:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.178 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.438 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:47.438 10:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.379 10:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.379 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.639 00:21:48.639 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.639 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.639 10:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.898 { 00:21:48.898 "cntlid": 111, 00:21:48.898 "qid": 0, 00:21:48.898 "state": "enabled", 00:21:48.898 "thread": "nvmf_tgt_poll_group_000", 00:21:48.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:48.898 "listen_address": { 00:21:48.898 "trtype": "TCP", 00:21:48.898 "adrfam": "IPv4", 00:21:48.898 "traddr": "10.0.0.2", 00:21:48.898 "trsvcid": "4420" 00:21:48.898 }, 00:21:48.898 "peer_address": { 00:21:48.898 "trtype": "TCP", 00:21:48.898 "adrfam": "IPv4", 00:21:48.898 "traddr": "10.0.0.1", 00:21:48.898 "trsvcid": "49178" 00:21:48.898 }, 00:21:48.898 "auth": { 00:21:48.898 "state": "completed", 00:21:48.898 "digest": "sha512", 00:21:48.898 "dhgroup": "ffdhe2048" 00:21:48.898 } 00:21:48.898 } 00:21:48.898 ]' 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.898 
10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.898 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.159 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:49.159 10:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:49.729 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.729 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.989 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.990 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.250 00:21:50.250 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.250 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.250 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.510 { 00:21:50.510 "cntlid": 113, 00:21:50.510 "qid": 0, 00:21:50.510 "state": "enabled", 00:21:50.510 "thread": "nvmf_tgt_poll_group_000", 00:21:50.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.510 "listen_address": { 00:21:50.510 "trtype": "TCP", 00:21:50.510 "adrfam": "IPv4", 00:21:50.510 "traddr": "10.0.0.2", 00:21:50.510 "trsvcid": "4420" 00:21:50.510 }, 00:21:50.510 "peer_address": { 00:21:50.510 "trtype": "TCP", 00:21:50.510 "adrfam": "IPv4", 00:21:50.510 "traddr": "10.0.0.1", 00:21:50.510 "trsvcid": "49208" 00:21:50.510 }, 00:21:50.510 "auth": { 00:21:50.510 "state": "completed", 00:21:50.510 "digest": "sha512", 00:21:50.510 "dhgroup": "ffdhe3072" 00:21:50.510 } 00:21:50.510 } 00:21:50.510 ]' 00:21:50.510 10:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.510 10:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.773 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.773 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.773 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.773 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:50.773 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.712 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.712 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.972 00:21:51.972 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.972 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.972 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.232 { 00:21:52.232 "cntlid": 115, 00:21:52.232 "qid": 0, 00:21:52.232 "state": "enabled", 00:21:52.232 "thread": "nvmf_tgt_poll_group_000", 00:21:52.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.232 "listen_address": { 00:21:52.232 "trtype": "TCP", 00:21:52.232 "adrfam": "IPv4", 00:21:52.232 "traddr": "10.0.0.2", 00:21:52.232 "trsvcid": "4420" 00:21:52.232 }, 00:21:52.232 "peer_address": { 00:21:52.232 "trtype": "TCP", 00:21:52.232 "adrfam": "IPv4", 
00:21:52.232 "traddr": "10.0.0.1", 00:21:52.232 "trsvcid": "49250" 00:21:52.232 }, 00:21:52.232 "auth": { 00:21:52.232 "state": "completed", 00:21:52.232 "digest": "sha512", 00:21:52.232 "dhgroup": "ffdhe3072" 00:21:52.232 } 00:21:52.232 } 00:21:52.232 ]' 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.232 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.493 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.493 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.493 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.493 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:52.493 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.433 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.434 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.694 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.694 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.695 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.695 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.695 00:21:53.695 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.695 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.695 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.955 { 00:21:53.955 "cntlid": 117, 00:21:53.955 "qid": 0, 00:21:53.955 "state": "enabled", 00:21:53.955 "thread": "nvmf_tgt_poll_group_000", 00:21:53.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:53.955 "listen_address": { 00:21:53.955 "trtype": "TCP", 
00:21:53.955 "adrfam": "IPv4", 00:21:53.955 "traddr": "10.0.0.2", 00:21:53.955 "trsvcid": "4420" 00:21:53.955 }, 00:21:53.955 "peer_address": { 00:21:53.955 "trtype": "TCP", 00:21:53.955 "adrfam": "IPv4", 00:21:53.955 "traddr": "10.0.0.1", 00:21:53.955 "trsvcid": "49284" 00:21:53.955 }, 00:21:53.955 "auth": { 00:21:53.955 "state": "completed", 00:21:53.955 "digest": "sha512", 00:21:53.955 "dhgroup": "ffdhe3072" 00:21:53.955 } 00:21:53.955 } 00:21:53.955 ]' 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.955 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:54.215 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.155 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.415 00:21:55.415 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.415 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.415 10:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.675 { 00:21:55.675 "cntlid": 119, 00:21:55.675 "qid": 0, 00:21:55.675 "state": "enabled", 00:21:55.675 "thread": "nvmf_tgt_poll_group_000", 00:21:55.675 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:55.675 "listen_address": { 00:21:55.675 "trtype": "TCP", 00:21:55.675 "adrfam": "IPv4", 00:21:55.675 "traddr": "10.0.0.2", 00:21:55.675 "trsvcid": "4420" 00:21:55.675 }, 00:21:55.675 "peer_address": { 00:21:55.675 "trtype": "TCP", 00:21:55.675 "adrfam": "IPv4", 00:21:55.675 "traddr": "10.0.0.1", 00:21:55.675 "trsvcid": "49308" 00:21:55.675 }, 00:21:55.675 "auth": { 00:21:55.675 "state": "completed", 00:21:55.675 "digest": "sha512", 00:21:55.675 "dhgroup": "ffdhe3072" 00:21:55.675 } 00:21:55.675 } 00:21:55.675 ]' 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.675 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.935 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.935 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.935 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.935 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:55.935 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.875 10:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.875 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.135 00:21:57.135 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.135 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.135 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.394 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.395 10:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.395 { 00:21:57.395 "cntlid": 121, 00:21:57.395 "qid": 0, 00:21:57.395 "state": "enabled", 00:21:57.395 "thread": "nvmf_tgt_poll_group_000", 00:21:57.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:57.395 "listen_address": { 00:21:57.395 "trtype": "TCP", 00:21:57.395 "adrfam": "IPv4", 00:21:57.395 "traddr": "10.0.0.2", 00:21:57.395 "trsvcid": "4420" 00:21:57.395 }, 00:21:57.395 "peer_address": { 00:21:57.395 "trtype": "TCP", 00:21:57.395 "adrfam": "IPv4", 00:21:57.395 "traddr": "10.0.0.1", 00:21:57.395 "trsvcid": "49348" 00:21:57.395 }, 00:21:57.395 "auth": { 00:21:57.395 "state": "completed", 00:21:57.395 "digest": "sha512", 00:21:57.395 "dhgroup": "ffdhe4096" 00:21:57.395 } 00:21:57.395 } 00:21:57.395 ]' 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.395 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.655 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.655 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.655 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.655 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:57.655 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
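[editor's note] The other leg of each pass, visible around this point in the trace, repeats the handshake with the kernel initiator: nvme-cli connects using the raw DHHC-1 secrets instead of the named keys, the script confirms the disconnect, and the host NQN is removed again before the next digest/dhgroup pair is tried. A minimal sketch, with placeholder strings standing in for the DHHC-1 secrets shown in the log:

    # Sketch of the nvme-cli leg of a pass (secrets below are placeholders, not the real values).
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Kernel initiator: authenticate with the DHHC-1 host and controller secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

    # Tear down and de-authorize the host before the next digest/dhgroup combination.
    nvme disconnect -n "$SUBNQN"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"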
00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.627 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.627 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.627 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.908 00:21:58.908 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.908 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.908 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.194 { 00:21:59.194 "cntlid": 123, 00:21:59.194 "qid": 0, 00:21:59.194 "state": "enabled", 00:21:59.194 "thread": "nvmf_tgt_poll_group_000", 00:21:59.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.194 "listen_address": { 00:21:59.194 "trtype": "TCP", 00:21:59.194 "adrfam": "IPv4", 00:21:59.194 "traddr": "10.0.0.2", 00:21:59.194 "trsvcid": "4420" 00:21:59.194 }, 00:21:59.194 "peer_address": { 00:21:59.194 "trtype": "TCP", 00:21:59.194 "adrfam": "IPv4", 00:21:59.194 "traddr": "10.0.0.1", 00:21:59.194 "trsvcid": "58332" 00:21:59.194 }, 00:21:59.194 "auth": { 00:21:59.194 "state": "completed", 00:21:59.194 "digest": "sha512", 00:21:59.194 "dhgroup": "ffdhe4096" 00:21:59.194 } 00:21:59.194 } 00:21:59.194 ]' 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.194 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.506 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:21:59.506 10:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.095 10:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.095 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.356 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.616 00:22:00.616 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.616 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.616 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.877 10:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.877 { 00:22:00.877 "cntlid": 125, 00:22:00.877 "qid": 0, 00:22:00.877 "state": "enabled", 00:22:00.877 "thread": "nvmf_tgt_poll_group_000", 00:22:00.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:00.877 "listen_address": { 00:22:00.877 "trtype": "TCP", 00:22:00.877 "adrfam": "IPv4", 00:22:00.877 "traddr": "10.0.0.2", 00:22:00.877 "trsvcid": "4420" 00:22:00.877 }, 00:22:00.877 "peer_address": { 00:22:00.877 "trtype": "TCP", 00:22:00.877 "adrfam": "IPv4", 00:22:00.877 "traddr": "10.0.0.1", 00:22:00.877 "trsvcid": "58364" 00:22:00.877 }, 00:22:00.877 "auth": { 00:22:00.877 "state": "completed", 00:22:00.877 "digest": "sha512", 00:22:00.877 "dhgroup": "ffdhe4096" 00:22:00.877 } 00:22:00.877 } 00:22:00.877 ]' 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.877 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.138 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.138 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.138 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.138 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:01.138 10:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.079 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.339 00:22:02.339 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.339 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.339 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.599 10:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.599 10:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.599 { 00:22:02.599 "cntlid": 127, 00:22:02.599 "qid": 0, 00:22:02.599 "state": "enabled", 00:22:02.599 "thread": "nvmf_tgt_poll_group_000", 00:22:02.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.599 "listen_address": { 00:22:02.599 "trtype": "TCP", 00:22:02.599 "adrfam": "IPv4", 00:22:02.599 "traddr": "10.0.0.2", 00:22:02.599 "trsvcid": "4420" 00:22:02.599 }, 00:22:02.599 "peer_address": { 00:22:02.599 "trtype": "TCP", 00:22:02.599 "adrfam": "IPv4", 00:22:02.599 "traddr": "10.0.0.1", 00:22:02.599 "trsvcid": "58392" 00:22:02.599 }, 00:22:02.599 "auth": { 00:22:02.599 "state": "completed", 00:22:02.599 "digest": "sha512", 00:22:02.599 "dhgroup": "ffdhe4096" 00:22:02.599 } 00:22:02.599 } 00:22:02.599 ]' 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.599 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:02.860 10:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.802 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.803 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.374 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.374 { 00:22:04.374 "cntlid": 129, 00:22:04.374 "qid": 0, 00:22:04.374 "state": "enabled", 00:22:04.374 "thread": "nvmf_tgt_poll_group_000", 00:22:04.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.374 "listen_address": { 00:22:04.374 "trtype": "TCP", 00:22:04.374 "adrfam": "IPv4", 00:22:04.374 "traddr": "10.0.0.2", 00:22:04.374 "trsvcid": "4420" 00:22:04.374 }, 00:22:04.374 "peer_address": { 00:22:04.374 "trtype": "TCP", 00:22:04.374 "adrfam": "IPv4", 00:22:04.374 "traddr": "10.0.0.1", 00:22:04.374 "trsvcid": "58410" 00:22:04.374 }, 00:22:04.374 "auth": { 00:22:04.374 "state": "completed", 00:22:04.374 "digest": "sha512", 00:22:04.374 "dhgroup": "ffdhe6144" 00:22:04.374 } 00:22:04.374 } 00:22:04.374 ]' 00:22:04.374 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.634 10:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.895 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:04.895 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret 
DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.464 10:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.724 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.984 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.245 { 00:22:06.245 "cntlid": 131, 00:22:06.245 "qid": 0, 00:22:06.245 "state": "enabled", 00:22:06.245 "thread": "nvmf_tgt_poll_group_000", 00:22:06.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.245 "listen_address": { 00:22:06.245 "trtype": "TCP", 00:22:06.245 "adrfam": "IPv4", 00:22:06.245 "traddr": "10.0.0.2", 00:22:06.245 "trsvcid": "4420" 00:22:06.245 }, 00:22:06.245 "peer_address": { 00:22:06.245 "trtype": "TCP", 00:22:06.245 "adrfam": "IPv4", 00:22:06.245 "traddr": "10.0.0.1", 00:22:06.245 "trsvcid": "58440" 00:22:06.245 }, 00:22:06.245 "auth": { 00:22:06.245 "state": "completed", 00:22:06.245 "digest": "sha512", 00:22:06.245 "dhgroup": "ffdhe6144" 00:22:06.245 } 00:22:06.245 } 00:22:06.245 ]' 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.245 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:22:06.507 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.448 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.709 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.970 00:22:07.970 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.970 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.970 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.230 { 00:22:08.230 "cntlid": 133, 00:22:08.230 "qid": 0, 00:22:08.230 "state": "enabled", 00:22:08.230 "thread": "nvmf_tgt_poll_group_000", 00:22:08.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:08.230 "listen_address": { 00:22:08.230 "trtype": "TCP", 00:22:08.230 "adrfam": "IPv4", 00:22:08.230 "traddr": "10.0.0.2", 00:22:08.230 "trsvcid": "4420" 00:22:08.230 }, 00:22:08.230 "peer_address": { 00:22:08.230 "trtype": "TCP", 00:22:08.230 "adrfam": "IPv4", 00:22:08.230 "traddr": "10.0.0.1", 00:22:08.230 "trsvcid": "58468" 00:22:08.230 }, 00:22:08.230 "auth": { 00:22:08.230 "state": "completed", 00:22:08.230 "digest": "sha512", 00:22:08.230 "dhgroup": "ffdhe6144" 00:22:08.230 } 00:22:08.230 } 00:22:08.230 ]' 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.230 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.491 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret 
DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:08.491 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:09.430 10:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.690 00:22:09.690 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.690 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.690 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.950 { 00:22:09.950 "cntlid": 135, 00:22:09.950 "qid": 0, 00:22:09.950 "state": "enabled", 00:22:09.950 "thread": "nvmf_tgt_poll_group_000", 00:22:09.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.950 "listen_address": { 00:22:09.950 "trtype": "TCP", 00:22:09.950 "adrfam": "IPv4", 00:22:09.950 "traddr": "10.0.0.2", 00:22:09.950 "trsvcid": "4420" 00:22:09.950 }, 00:22:09.950 "peer_address": { 00:22:09.950 "trtype": "TCP", 00:22:09.950 "adrfam": "IPv4", 00:22:09.950 "traddr": "10.0.0.1", 00:22:09.950 "trsvcid": "37436" 00:22:09.950 }, 00:22:09.950 "auth": { 00:22:09.950 "state": "completed", 00:22:09.950 "digest": "sha512", 00:22:09.950 "dhgroup": "ffdhe6144" 00:22:09.950 } 00:22:09.950 } 00:22:09.950 ]' 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.950 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.210 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.210 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.210 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.211 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:10.211 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.148 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.716 00:22:11.716 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.716 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.716 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.975 { 00:22:11.975 "cntlid": 137, 00:22:11.975 "qid": 0, 00:22:11.975 "state": "enabled", 00:22:11.975 "thread": "nvmf_tgt_poll_group_000", 00:22:11.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.975 "listen_address": { 00:22:11.975 "trtype": "TCP", 00:22:11.975 "adrfam": "IPv4", 00:22:11.975 "traddr": "10.0.0.2", 00:22:11.975 "trsvcid": "4420" 00:22:11.975 }, 00:22:11.975 "peer_address": { 00:22:11.975 "trtype": "TCP", 00:22:11.975 "adrfam": "IPv4", 00:22:11.975 "traddr": "10.0.0.1", 00:22:11.975 "trsvcid": "37460" 00:22:11.975 }, 00:22:11.975 "auth": { 00:22:11.975 "state": "completed", 00:22:11.975 "digest": "sha512", 00:22:11.975 "dhgroup": "ffdhe8192" 00:22:11.975 } 00:22:11.975 } 00:22:11.975 ]' 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.975 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.234 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:12.235 10:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.173 10:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.173 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.742 00:22:13.742 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.742 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.742 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.002 { 00:22:14.002 "cntlid": 139, 00:22:14.002 "qid": 0, 00:22:14.002 "state": "enabled", 00:22:14.002 "thread": "nvmf_tgt_poll_group_000", 00:22:14.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.002 "listen_address": { 00:22:14.002 "trtype": "TCP", 00:22:14.002 "adrfam": "IPv4", 00:22:14.002 "traddr": "10.0.0.2", 00:22:14.002 "trsvcid": "4420" 00:22:14.002 }, 00:22:14.002 "peer_address": { 00:22:14.002 "trtype": "TCP", 00:22:14.002 "adrfam": "IPv4", 00:22:14.002 "traddr": "10.0.0.1", 00:22:14.002 "trsvcid": "37490" 00:22:14.002 }, 00:22:14.002 "auth": { 00:22:14.002 "state": "completed", 00:22:14.002 "digest": "sha512", 00:22:14.002 "dhgroup": "ffdhe8192" 00:22:14.002 } 00:22:14.002 } 00:22:14.002 ]' 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.002 10:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.002 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.262 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:22:14.262 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: --dhchap-ctrl-secret DHHC-1:02:Zjc4ZWExMzE0OTJlZDg3MWFiMWVkNDVjZjFhODg2YzEyZjMwNzk1YzQ3MTAyMTEza5MAAA==: 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.199 10:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.199 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.200 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.200 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.200 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.770 00:22:15.770 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.770 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.770 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.029 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.030 { 00:22:16.030 "cntlid": 141, 00:22:16.030 "qid": 0, 00:22:16.030 "state": "enabled", 00:22:16.030 "thread": "nvmf_tgt_poll_group_000", 00:22:16.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:16.030 "listen_address": { 00:22:16.030 "trtype": "TCP", 00:22:16.030 "adrfam": "IPv4", 00:22:16.030 "traddr": "10.0.0.2", 00:22:16.030 "trsvcid": "4420" 00:22:16.030 }, 00:22:16.030 "peer_address": { 00:22:16.030 "trtype": "TCP", 00:22:16.030 "adrfam": "IPv4", 00:22:16.030 "traddr": "10.0.0.1", 00:22:16.030 "trsvcid": "37508" 00:22:16.030 }, 00:22:16.030 "auth": { 00:22:16.030 "state": "completed", 00:22:16.030 "digest": "sha512", 00:22:16.030 "dhgroup": "ffdhe8192" 00:22:16.030 } 00:22:16.030 } 00:22:16.030 ]' 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.030 10:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.030 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.290 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:16.290 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:01:OTQ4NTE1ZjBlOTNhY2Q0NTM2MTRjMzMzOTllYmFiZGXhvvnA: 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.228 10:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.228 10:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.799 00:22:17.799 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.799 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.799 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.058 { 00:22:18.058 "cntlid": 143, 00:22:18.058 "qid": 0, 00:22:18.058 "state": "enabled", 00:22:18.058 "thread": "nvmf_tgt_poll_group_000", 00:22:18.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.058 "listen_address": { 00:22:18.058 "trtype": "TCP", 00:22:18.058 "adrfam": "IPv4", 00:22:18.058 "traddr": "10.0.0.2", 00:22:18.058 "trsvcid": "4420" 00:22:18.058 }, 00:22:18.058 "peer_address": { 00:22:18.058 "trtype": "TCP", 00:22:18.058 "adrfam": "IPv4", 00:22:18.058 "traddr": "10.0.0.1", 00:22:18.058 "trsvcid": "37528" 00:22:18.058 }, 00:22:18.058 "auth": { 00:22:18.058 "state": "completed", 00:22:18.058 "digest": "sha512", 00:22:18.058 "dhgroup": "ffdhe8192" 00:22:18.058 } 00:22:18.058 } 00:22:18.058 ]' 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.058 
10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.058 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.318 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:18.318 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:18.889 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.889 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.889 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.889 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.889 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.148 10:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.148 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.149 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.717 00:22:19.717 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.717 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.717 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.977 { 00:22:19.977 "cntlid": 145, 00:22:19.977 "qid": 0, 00:22:19.977 "state": "enabled", 00:22:19.977 "thread": "nvmf_tgt_poll_group_000", 00:22:19.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:19.977 "listen_address": { 00:22:19.977 "trtype": "TCP", 00:22:19.977 "adrfam": "IPv4", 00:22:19.977 "traddr": "10.0.0.2", 00:22:19.977 "trsvcid": "4420" 00:22:19.977 }, 00:22:19.977 "peer_address": { 00:22:19.977 
"trtype": "TCP", 00:22:19.977 "adrfam": "IPv4", 00:22:19.977 "traddr": "10.0.0.1", 00:22:19.977 "trsvcid": "59964" 00:22:19.977 }, 00:22:19.977 "auth": { 00:22:19.977 "state": "completed", 00:22:19.977 "digest": "sha512", 00:22:19.977 "dhgroup": "ffdhe8192" 00:22:19.977 } 00:22:19.977 } 00:22:19.977 ]' 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.977 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.237 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:20.237 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MGI5OWMyYzA5NzE3MmFhZDJjZGE5ZWVjMzc4OGNlYjcwZjBiZmM3MDU4NDc2MGU2Y71tsg==: --dhchap-ctrl-secret DHHC-1:03:ODUxNmJmNzMxZDE2OGFkMWNlOGQ3ZWNlMDVmYWY1NzgxZmJiY2UzOTZlOTk1ODJjOTAzYmU3YmMzNjExNWZiZOLS5I8=: 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:21.176 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:21.435 request: 00:22:21.435 { 00:22:21.435 "name": "nvme0", 00:22:21.435 "trtype": "tcp", 00:22:21.435 "traddr": "10.0.0.2", 00:22:21.435 "adrfam": "ipv4", 00:22:21.435 "trsvcid": "4420", 00:22:21.435 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:21.435 "prchk_reftag": false, 00:22:21.435 "prchk_guard": false, 00:22:21.435 "hdgst": false, 00:22:21.435 "ddgst": false, 00:22:21.435 "dhchap_key": "key2", 00:22:21.435 "allow_unrecognized_csi": false, 00:22:21.435 "method": "bdev_nvme_attach_controller", 00:22:21.435 "req_id": 1 00:22:21.435 } 00:22:21.435 Got JSON-RPC error response 00:22:21.435 response: 00:22:21.435 { 00:22:21.435 "code": -5, 00:22:21.435 "message": "Input/output error" 00:22:21.435 } 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.435 10:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.435 10:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.005 request: 00:22:22.005 { 00:22:22.005 "name": "nvme0", 00:22:22.005 "trtype": "tcp", 00:22:22.005 "traddr": "10.0.0.2", 00:22:22.005 "adrfam": "ipv4", 00:22:22.005 "trsvcid": "4420", 00:22:22.005 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.005 "prchk_reftag": false, 00:22:22.005 "prchk_guard": false, 00:22:22.005 "hdgst": false, 00:22:22.005 "ddgst": false, 00:22:22.005 "dhchap_key": "key1", 00:22:22.005 "dhchap_ctrlr_key": "ckey2", 00:22:22.005 "allow_unrecognized_csi": false, 00:22:22.005 "method": "bdev_nvme_attach_controller", 00:22:22.005 "req_id": 1 00:22:22.005 } 00:22:22.005 Got JSON-RPC error response 00:22:22.005 response: 00:22:22.005 { 00:22:22.005 "code": -5, 00:22:22.005 "message": "Input/output error" 00:22:22.005 } 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.005 10:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.005 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.575 request: 00:22:22.575 { 00:22:22.575 "name": "nvme0", 00:22:22.575 "trtype": "tcp", 00:22:22.575 "traddr": "10.0.0.2", 00:22:22.575 "adrfam": "ipv4", 00:22:22.575 "trsvcid": "4420", 00:22:22.575 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.575 "prchk_reftag": false, 00:22:22.575 "prchk_guard": false, 00:22:22.575 "hdgst": false, 00:22:22.575 "ddgst": false, 00:22:22.575 "dhchap_key": "key1", 00:22:22.575 "dhchap_ctrlr_key": "ckey1", 00:22:22.575 "allow_unrecognized_csi": false, 00:22:22.575 "method": "bdev_nvme_attach_controller", 00:22:22.575 "req_id": 1 00:22:22.575 } 00:22:22.575 Got JSON-RPC error response 00:22:22.575 response: 00:22:22.575 { 00:22:22.575 "code": -5, 00:22:22.575 "message": "Input/output error" 00:22:22.575 } 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3864961 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3864961 ']' 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3864961 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3864961 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3864961' 00:22:22.575 killing process with pid 3864961 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3864961 00:22:22.575 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3864961 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3892395 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3892395 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3892395 ']' 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.836 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.776 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:23.776 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.776 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.776 10:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3892395 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3892395 ']' 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.776 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 null0 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.geb 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.b0o ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0o 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yZG 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.C1Y ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.C1Y 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.037 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.037 10:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AIj 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.L8t ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8t 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dOY 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:22:24.038 10:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.977 nvme0n1 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.977 { 00:22:24.977 "cntlid": 1, 00:22:24.977 "qid": 0, 00:22:24.977 "state": "enabled", 00:22:24.977 "thread": "nvmf_tgt_poll_group_000", 00:22:24.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:24.977 "listen_address": { 00:22:24.977 "trtype": "TCP", 00:22:24.977 "adrfam": "IPv4", 00:22:24.977 "traddr": "10.0.0.2", 00:22:24.977 "trsvcid": "4420" 00:22:24.977 }, 00:22:24.977 "peer_address": { 00:22:24.977 "trtype": "TCP", 00:22:24.977 "adrfam": "IPv4", 00:22:24.977 "traddr": "10.0.0.1", 00:22:24.977 "trsvcid": "60010" 00:22:24.977 }, 00:22:24.977 "auth": { 00:22:24.977 "state": "completed", 00:22:24.977 "digest": "sha512", 00:22:24.977 "dhgroup": "ffdhe8192" 00:22:24.977 } 00:22:24.977 } 00:22:24.977 ]' 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.977 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:25.237 10:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:26.178 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.490 request: 00:22:26.490 { 00:22:26.490 "name": "nvme0", 00:22:26.490 "trtype": "tcp", 00:22:26.490 "traddr": "10.0.0.2", 00:22:26.490 "adrfam": "ipv4", 00:22:26.490 "trsvcid": "4420", 00:22:26.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:26.490 "prchk_reftag": false, 00:22:26.490 "prchk_guard": false, 00:22:26.490 "hdgst": false, 00:22:26.490 "ddgst": false, 00:22:26.490 "dhchap_key": "key3", 00:22:26.490 "allow_unrecognized_csi": false, 00:22:26.490 "method": "bdev_nvme_attach_controller", 00:22:26.490 "req_id": 1 00:22:26.490 } 00:22:26.490 Got JSON-RPC error response 00:22:26.490 response: 00:22:26.490 { 00:22:26.490 "code": -5, 00:22:26.490 "message": "Input/output error" 00:22:26.490 } 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.490 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.491 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.491 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:26.491 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:26.491 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:26.491 10:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.797 request: 00:22:26.797 { 00:22:26.797 "name": "nvme0", 00:22:26.797 "trtype": "tcp", 00:22:26.797 "traddr": "10.0.0.2", 00:22:26.797 "adrfam": "ipv4", 00:22:26.797 "trsvcid": "4420", 00:22:26.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:26.797 "prchk_reftag": false, 00:22:26.797 "prchk_guard": false, 00:22:26.797 "hdgst": false, 00:22:26.797 "ddgst": false, 00:22:26.797 "dhchap_key": "key3", 00:22:26.797 "allow_unrecognized_csi": false, 00:22:26.797 "method": "bdev_nvme_attach_controller", 00:22:26.797 "req_id": 1 00:22:26.797 } 00:22:26.797 Got JSON-RPC error response 00:22:26.797 response: 00:22:26.797 { 00:22:26.797 "code": -5, 00:22:26.797 "message": "Input/output error" 00:22:26.797 } 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.797 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.062 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.323 request: 00:22:27.323 { 00:22:27.323 "name": "nvme0", 00:22:27.323 "trtype": "tcp", 00:22:27.323 "traddr": "10.0.0.2", 00:22:27.323 "adrfam": "ipv4", 00:22:27.323 "trsvcid": "4420", 00:22:27.323 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.323 "prchk_reftag": false, 00:22:27.323 "prchk_guard": false, 00:22:27.323 "hdgst": false, 00:22:27.323 "ddgst": false, 00:22:27.323 "dhchap_key": "key0", 00:22:27.323 "dhchap_ctrlr_key": "key1", 00:22:27.323 "allow_unrecognized_csi": false, 00:22:27.323 "method": "bdev_nvme_attach_controller", 00:22:27.323 "req_id": 1 00:22:27.323 } 00:22:27.323 Got JSON-RPC error response 00:22:27.323 response: 00:22:27.323 { 00:22:27.323 "code": -5, 00:22:27.323 "message": "Input/output error" 00:22:27.323 } 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.323 10:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:27.323 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:27.583 nvme0n1 00:22:27.583 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:27.583 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:27.583 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.843 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.843 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.843 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.102 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:28.102 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.103 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.103 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.103 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:28.103 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:28.103 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.041 nvme0n1 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.041 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.042 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.042 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.042 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:29.042 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:29.042 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.301 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.302 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:29.302 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: --dhchap-ctrl-secret DHHC-1:03:OTNmZmU3NWQxMzRiMmE3YWEwN2FlYjY5YzVhNGE2NDRiMDk5ZDk1MjhlZWI5MWYyMjBlNWE3OTBjNzI0NjcxZE+rJFw=: 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.870 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.130 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.700 request: 00:22:30.700 { 00:22:30.700 "name": "nvme0", 00:22:30.700 "trtype": "tcp", 00:22:30.700 "traddr": "10.0.0.2", 00:22:30.700 "adrfam": "ipv4", 00:22:30.700 "trsvcid": "4420", 00:22:30.700 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.700 "prchk_reftag": false, 00:22:30.700 "prchk_guard": false, 00:22:30.700 "hdgst": false, 00:22:30.700 "ddgst": false, 00:22:30.700 "dhchap_key": "key1", 00:22:30.700 "allow_unrecognized_csi": false, 00:22:30.700 "method": "bdev_nvme_attach_controller", 00:22:30.700 "req_id": 1 00:22:30.700 } 00:22:30.700 Got JSON-RPC error response 00:22:30.700 response: 00:22:30.700 { 00:22:30.700 "code": -5, 00:22:30.700 "message": "Input/output error" 00:22:30.700 } 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.700 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.639 nvme0n1 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.639 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:31.898 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:31.898 nvme0n1 00:22:32.158 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:32.158 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:32.159 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.159 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.159 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.159 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: '' 2s 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: ]] 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTNlZjc4NjVlYzJmMmVhYjJhNjc5MTgzYzA2YWFmOWL4gNum: 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:32.419 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.329 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: 2s 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: ]] 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDI0MjJhYTFhZTNhZDIwNDdhMjFkMmJmMTFmOTk4NmNmNmM0YjUwMTliZGQzYmE4VPF97A==: 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:34.589 10:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:36.497 10:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:37.437 nvme0n1 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:37.437 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.007 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:38.266 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:38.527 10:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:39.097 request: 00:22:39.097 { 00:22:39.097 "name": "nvme0", 00:22:39.097 "dhchap_key": "key1", 00:22:39.097 "dhchap_ctrlr_key": "key3", 00:22:39.097 "method": "bdev_nvme_set_keys", 00:22:39.097 "req_id": 1 00:22:39.097 } 00:22:39.097 Got JSON-RPC error response 00:22:39.097 response: 00:22:39.097 { 00:22:39.097 "code": -13, 00:22:39.097 "message": "Permission denied" 00:22:39.097 } 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:39.097 10:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:40.480 10:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:41.051 nvme0n1 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
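[editor's note] The trace around this point exercises live DH-CHAP re-keying: the target's allowed keys are updated with nvmf_subsystem_set_keys, the host mirrors them with bdev_nvme_set_keys, and a deliberately mismatched pair (key2/key0 against a target expecting key2/key3) must be rejected with JSON-RPC error -13 (Permission denied), which the NOT wrapper turns into a pass. A condensed sketch of that sequence, using the rpc.py invocations, socket path and NQNs from this run (key names refer to keys already loaded on the target's keyring earlier in the test; the target-side RPC socket is elided in the trace, so the default is assumed here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Target side: restrict this host to key2 (host key) / key3 (controller key).
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: matching keys are accepted by the existing nvme0 controller...
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

    # ...while a mismatched controller key must fail with -13 (Permission denied);
    # the NOT wrapper in the trace asserts exactly this expected failure.
    if $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0; then
        echo "re-key with mismatched keys unexpectedly succeeded" >&2
        exit 1
    fi
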
00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:41.312 10:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:41.882 request: 00:22:41.882 { 00:22:41.882 "name": "nvme0", 00:22:41.882 "dhchap_key": "key2", 00:22:41.882 "dhchap_ctrlr_key": "key0", 00:22:41.882 "method": "bdev_nvme_set_keys", 00:22:41.882 "req_id": 1 00:22:41.882 } 00:22:41.882 Got JSON-RPC error response 00:22:41.882 response: 00:22:41.882 { 00:22:41.882 "code": -13, 00:22:41.882 "message": "Permission denied" 00:22:41.882 } 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:41.882 10:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:42.822 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:42.822 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.822 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3865215 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3865215 ']' 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3865215 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:43.082 
10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3865215 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3865215' 00:22:43.082 killing process with pid 3865215 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3865215 00:22:43.082 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3865215 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.343 rmmod nvme_tcp 00:22:43.343 rmmod nvme_fabrics 00:22:43.343 rmmod nvme_keyring 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3892395 ']' 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3892395 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3892395 ']' 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3892395 00:22:43.343 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:43.344 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:43.344 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3892395 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3892395' 00:22:43.604 killing process with pid 3892395 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3892395 00:22:43.604 10:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3892395 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.604 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.geb /tmp/spdk.key-sha256.yZG /tmp/spdk.key-sha384.AIj /tmp/spdk.key-sha512.dOY /tmp/spdk.key-sha512.b0o /tmp/spdk.key-sha384.C1Y /tmp/spdk.key-sha256.L8t '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:46.146 00:22:46.146 real 2m46.467s 00:22:46.146 user 6m9.130s 00:22:46.146 sys 0m25.307s 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.146 ************************************ 00:22:46.146 END TEST nvmf_auth_target 00:22:46.146 ************************************ 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.146 ************************************ 00:22:46.146 START TEST nvmf_bdevio_no_huge 00:22:46.146 ************************************ 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:46.146 * Looking for test storage... 
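[editor's note] Before bdevio.sh touches any NVMe-oF plumbing, its common prologue gates the coverage options on the installed lcov version; the trace that follows walks the dotted-version comparator from scripts/common.sh (lt / cmp_versions, splitting on IFS=.-:). A minimal stand-in for that check, not the exact helper (which also handles '>', '=' and pre-release separators):

    # Returns success when dotted version $1 is strictly lower than $2,
    # comparing numerically field by field.
    version_lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"
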
00:22:46.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:46.146 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:46.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.147 --rc genhtml_branch_coverage=1 00:22:46.147 --rc genhtml_function_coverage=1 00:22:46.147 --rc genhtml_legend=1 00:22:46.147 --rc geninfo_all_blocks=1 00:22:46.147 --rc geninfo_unexecuted_blocks=1 00:22:46.147 00:22:46.147 ' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:46.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.147 --rc genhtml_branch_coverage=1 00:22:46.147 --rc genhtml_function_coverage=1 00:22:46.147 --rc genhtml_legend=1 00:22:46.147 --rc geninfo_all_blocks=1 00:22:46.147 --rc geninfo_unexecuted_blocks=1 00:22:46.147 00:22:46.147 ' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:46.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.147 --rc genhtml_branch_coverage=1 00:22:46.147 --rc genhtml_function_coverage=1 00:22:46.147 --rc genhtml_legend=1 00:22:46.147 --rc geninfo_all_blocks=1 00:22:46.147 --rc geninfo_unexecuted_blocks=1 00:22:46.147 00:22:46.147 ' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:46.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.147 --rc genhtml_branch_coverage=1 00:22:46.147 --rc genhtml_function_coverage=1 00:22:46.147 --rc genhtml_legend=1 00:22:46.147 --rc geninfo_all_blocks=1 00:22:46.147 --rc geninfo_unexecuted_blocks=1 00:22:46.147 00:22:46.147 ' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:46.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.147 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.281 
10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:54.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.281 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:54.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:54.282 Found net devices under 0000:31:00.0: cvl_0_0 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:54.282 Found net devices under 0000:31:00.1: cvl_0_1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:22:54.282 00:22:54.282 --- 10.0.0.2 ping statistics --- 00:22:54.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.282 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:22:54.282 00:22:54.282 --- 10.0.0.1 ping statistics --- 00:22:54.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.282 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3901152 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3901152 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3901152 ']' 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.282 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.282 [2024-11-06 10:14:57.511969] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:54.282 [2024-11-06 10:14:57.512044] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:54.282 [2024-11-06 10:14:57.628599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.282 [2024-11-06 10:14:57.688096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.282 [2024-11-06 10:14:57.688145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.282 [2024-11-06 10:14:57.688153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.282 [2024-11-06 10:14:57.688160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.282 [2024-11-06 10:14:57.688166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.282 [2024-11-06 10:14:57.689783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.282 [2024-11-06 10:14:57.689932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:54.283 [2024-11-06 10:14:57.690096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:54.283 [2024-11-06 10:14:57.690196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.853 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:54.853 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:54.853 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.853 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.853 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 [2024-11-06 10:14:58.395489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 Malloc0 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.115 [2024-11-06 10:14:58.449215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.115 { 00:22:55.115 "params": { 00:22:55.115 "name": "Nvme$subsystem", 00:22:55.115 "trtype": "$TEST_TRANSPORT", 00:22:55.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.115 "adrfam": "ipv4", 00:22:55.115 "trsvcid": "$NVMF_PORT", 00:22:55.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.115 "hdgst": ${hdgst:-false}, 00:22:55.115 "ddgst": ${ddgst:-false} 00:22:55.115 }, 00:22:55.115 "method": "bdev_nvme_attach_controller" 00:22:55.115 } 00:22:55.115 EOF 00:22:55.115 )") 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:55.115 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:55.115 "params": { 00:22:55.115 "name": "Nvme1", 00:22:55.115 "trtype": "tcp", 00:22:55.115 "traddr": "10.0.0.2", 00:22:55.115 "adrfam": "ipv4", 00:22:55.115 "trsvcid": "4420", 00:22:55.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.115 "hdgst": false, 00:22:55.115 "ddgst": false 00:22:55.115 }, 00:22:55.115 "method": "bdev_nvme_attach_controller" 00:22:55.115 }' 00:22:55.115 [2024-11-06 10:14:58.508910] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:22:55.115 [2024-11-06 10:14:58.508981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3901276 ] 00:22:55.115 [2024-11-06 10:14:58.598208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.376 [2024-11-06 10:14:58.653215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.376 [2024-11-06 10:14:58.653331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.376 [2024-11-06 10:14:58.653334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.635 I/O targets: 00:22:55.635 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:55.635 00:22:55.635 00:22:55.635 CUnit - A unit testing framework for C - Version 2.1-3 00:22:55.635 http://cunit.sourceforge.net/ 00:22:55.635 00:22:55.635 00:22:55.635 Suite: bdevio tests on: Nvme1n1 00:22:55.635 Test: blockdev write read block ...passed 00:22:55.635 Test: blockdev write zeroes read block ...passed 00:22:55.635 Test: blockdev write zeroes read no split ...passed 00:22:55.635 Test: blockdev write zeroes read split ...passed 00:22:55.896 Test: blockdev write zeroes read split partial ...passed 00:22:55.896 Test: blockdev reset ...[2024-11-06 10:14:59.153100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:55.896 [2024-11-06 10:14:59.153168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2cfb0 (9): Bad file descriptor 00:22:55.896 [2024-11-06 10:14:59.168225] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:55.896 passed 00:22:55.896 Test: blockdev write read 8 blocks ...passed 00:22:55.896 Test: blockdev write read size > 128k ...passed 00:22:55.896 Test: blockdev write read invalid size ...passed 00:22:55.896 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.896 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.896 Test: blockdev write read max offset ...passed 00:22:55.896 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.896 Test: blockdev writev readv 8 blocks ...passed 00:22:55.896 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.896 Test: blockdev writev readv block ...passed 00:22:55.896 Test: blockdev writev readv size > 128k ...passed 00:22:55.896 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.896 Test: blockdev comparev and writev ...[2024-11-06 10:14:59.385413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.385448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.385669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.385687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.385918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.385937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.385944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.386149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.386158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:55.896 [2024-11-06 10:14:59.386168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:55.896 [2024-11-06 10:14:59.386174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:56.156 passed 00:22:56.156 Test: blockdev nvme passthru rw ...passed 00:22:56.156 Test: blockdev nvme passthru vendor specific ...[2024-11-06 10:14:59.469220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.156 [2024-11-06 10:14:59.469232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:56.156 [2024-11-06 10:14:59.469314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.156 [2024-11-06 10:14:59.469320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:56.156 [2024-11-06 10:14:59.469404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.156 [2024-11-06 10:14:59.469411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:56.156 [2024-11-06 10:14:59.469498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:56.156 [2024-11-06 10:14:59.469505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:56.156 passed 00:22:56.156 Test: blockdev nvme admin passthru ...passed 00:22:56.156 Test: blockdev copy ...passed 00:22:56.156 00:22:56.156 Run Summary: Type Total Ran Passed Failed Inactive 00:22:56.156 suites 1 1 n/a 0 0 00:22:56.156 tests 23 23 23 0 0 00:22:56.156 asserts 152 152 152 0 n/a 00:22:56.156 00:22:56.156 Elapsed time = 1.186 seconds 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.416 rmmod nvme_tcp 00:22:56.416 rmmod nvme_fabrics 00:22:56.416 rmmod nvme_keyring 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3901152 ']' 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3901152 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3901152 ']' 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3901152 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:56.416 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901152 00:22:56.677 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:56.677 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:56.677 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901152' 00:22:56.677 killing process with pid 3901152 00:22:56.677 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3901152 00:22:56.677 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3901152 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.938 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.485 00:22:59.485 real 0m13.244s 00:22:59.485 user 0m14.660s 00:22:59.485 sys 0m7.252s 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.485 ************************************ 00:22:59.485 END TEST nvmf_bdevio_no_huge 00:22:59.485 ************************************ 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:59.485 ************************************ 00:22:59.485 START TEST nvmf_tls 00:22:59.485 ************************************ 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:59.485 * Looking for test storage... 00:22:59.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.485 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.485 --rc genhtml_branch_coverage=1 00:22:59.485 --rc genhtml_function_coverage=1 00:22:59.485 --rc genhtml_legend=1 00:22:59.485 --rc geninfo_all_blocks=1 00:22:59.485 --rc geninfo_unexecuted_blocks=1 00:22:59.485 00:22:59.485 ' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.486 --rc genhtml_branch_coverage=1 00:22:59.486 --rc genhtml_function_coverage=1 00:22:59.486 --rc genhtml_legend=1 00:22:59.486 --rc geninfo_all_blocks=1 00:22:59.486 --rc geninfo_unexecuted_blocks=1 00:22:59.486 00:22:59.486 ' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.486 --rc genhtml_branch_coverage=1 00:22:59.486 --rc genhtml_function_coverage=1 00:22:59.486 --rc genhtml_legend=1 00:22:59.486 --rc geninfo_all_blocks=1 00:22:59.486 --rc geninfo_unexecuted_blocks=1 00:22:59.486 00:22:59.486 ' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.486 --rc genhtml_branch_coverage=1 00:22:59.486 --rc genhtml_function_coverage=1 00:22:59.486 --rc genhtml_legend=1 00:22:59.486 --rc geninfo_all_blocks=1 00:22:59.486 --rc geninfo_unexecuted_blocks=1 00:22:59.486 00:22:59.486 ' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.486 10:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.640 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:07.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:07.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:07.641 Found net devices under 0000:31:00.0: cvl_0_0 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:07.641 Found net devices under 0000:31:00.1: cvl_0_1 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.641 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:23:07.902 00:23:07.902 --- 10.0.0.2 ping statistics --- 00:23:07.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.902 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:23:07.902 00:23:07.902 --- 10.0.0.1 ping statistics --- 00:23:07.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.902 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.902 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3906968 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3906968 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3906968 ']' 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.164 10:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.164 [2024-11-06 10:15:11.460206] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:08.164 [2024-11-06 10:15:11.460276] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.164 [2024-11-06 10:15:11.571479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.164 [2024-11-06 10:15:11.621448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.164 [2024-11-06 10:15:11.621502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.164 [2024-11-06 10:15:11.621511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.164 [2024-11-06 10:15:11.621518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.164 [2024-11-06 10:15:11.621525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.164 [2024-11-06 10:15:11.622338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:09.107 true 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.107 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:09.368 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:09.368 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:09.368 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:09.368 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.629 10:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:09.629 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:09.629 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:09.629 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:09.889 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.889 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:10.150 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:10.410 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.410 10:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:10.671 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:10.671 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:10.671 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:10.932 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZA1AqHZlk0 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.TAOJxOAhTC 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZA1AqHZlk0 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.TAOJxOAhTC 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:11.192 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:11.453 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZA1AqHZlk0 00:23:11.453 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZA1AqHZlk0 00:23:11.453 10:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.714 [2024-11-06 10:15:15.078282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.714 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.974 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.974 [2024-11-06 10:15:15.415094] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.974 [2024-11-06 10:15:15.415293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.974 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.235 malloc0 00:23:12.235 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:12.496 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZA1AqHZlk0 00:23:12.496 10:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.756 10:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZA1AqHZlk0 00:23:22.754 Initializing NVMe Controllers 00:23:22.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:22.754 Initialization complete. Launching workers. 00:23:22.754 ======================================================== 00:23:22.754 Latency(us) 00:23:22.754 Device Information : IOPS MiB/s Average min max 00:23:22.754 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18550.13 72.46 3450.15 1129.70 4084.41 00:23:22.754 ======================================================== 00:23:22.754 Total : 18550.13 72.46 3450.15 1129.70 4084.41 00:23:22.754 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZA1AqHZlk0 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZA1AqHZlk0 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3909909 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3909909 /var/tmp/bdevperf.sock 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3909909 ']' 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
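Note on the key material used above: the format_interchange_psk/format_key steps turn a configured PSK and a digest selector into the NVMeTLSkey-1:0N:...: strings that are then written to /tmp/tmp.ZA1AqHZlk0 and /tmp/tmp.TAOJxOAhTC and registered with keyring_file_add_key. The body of the inline `python -` heredoc is not shown in the trace; the sketch below is an assumption about what it computes (base64 of the key bytes with a little-endian CRC32 appended), not a verbatim copy of nvmf/common.sh.

```python
import base64
import struct
import zlib

def format_interchange_psk(key: bytes, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Assumed layout: '<prefix>:0<digest>:<base64(key || CRC32(key), little-endian)>:'."""
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode("ascii"))

# If the assumed layout holds, this should reproduce the key written to /tmp/tmp.ZA1AqHZlk0 above:
print(format_interchange_psk(b"00112233445566778899aabbccddeeff", 1))
# expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
```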
00:23:22.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:22.754 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.754 [2024-11-06 10:15:26.242505] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:22.754 [2024-11-06 10:15:26.242563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909909 ] 00:23:23.014 [2024-11-06 10:15:26.305994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.014 [2024-11-06 10:15:26.334929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.014 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:23.014 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:23.014 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZA1AqHZlk0 00:23:23.276 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.536 [2024-11-06 10:15:26.796413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.536 TLSTESTn1 00:23:23.536 10:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.536 Running I/O for 10 seconds... 
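The bdevperf side of the test is driven entirely over the Unix-domain JSON-RPC socket passed with -r /var/tmp/bdevperf.sock: keyring_file_add_key registers the PSK file as key0, bdev_nvme_attach_controller opens the TLS-protected queue pair as TLSTESTn1, and bdevperf.py perform_tests kicks off the 10-second verify workload configured on the bdevperf command line (-q 128 -o 4096 -w verify -t 10). scripts/rpc.py is the real client; purely as an illustration of the wire traffic, a minimal stdlib-only sketch could look like this (simplified framing; the actual client uses a streaming JSON decoder, and the parameter names are copied from the request dumps later in this log):

```python
import json
import socket

def rpc_call(sock_path: str, method: str, params: dict, request_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request over SPDK's Unix socket and wait for the reply."""
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)  # reply complete once it parses
            except json.JSONDecodeError:
                continue

# Same sequence as the trace above (paths and NQNs taken from this log):
sock = "/var/tmp/bdevperf.sock"
rpc_call(sock, "keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.ZA1AqHZlk0"})
rpc_call(sock, "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0",
})
```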
00:23:25.860 5089.00 IOPS, 19.88 MiB/s [2024-11-06T09:15:30.303Z] 5797.50 IOPS, 22.65 MiB/s [2024-11-06T09:15:31.244Z] 5840.33 IOPS, 22.81 MiB/s [2024-11-06T09:15:32.183Z] 5758.25 IOPS, 22.49 MiB/s [2024-11-06T09:15:33.124Z] 5568.00 IOPS, 21.75 MiB/s [2024-11-06T09:15:34.065Z] 5732.83 IOPS, 22.39 MiB/s [2024-11-06T09:15:35.019Z] 5593.43 IOPS, 21.85 MiB/s [2024-11-06T09:15:36.112Z] 5564.12 IOPS, 21.73 MiB/s [2024-11-06T09:15:37.051Z] 5601.33 IOPS, 21.88 MiB/s [2024-11-06T09:15:37.051Z] 5665.20 IOPS, 22.13 MiB/s 00:23:33.550 Latency(us) 00:23:33.550 [2024-11-06T09:15:37.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.550 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.550 Verification LBA range: start 0x0 length 0x2000 00:23:33.550 TLSTESTn1 : 10.01 5670.28 22.15 0.00 0.00 22541.83 4751.36 79080.11 00:23:33.550 [2024-11-06T09:15:37.051Z] =================================================================================================================== 00:23:33.550 [2024-11-06T09:15:37.051Z] Total : 5670.28 22.15 0.00 0.00 22541.83 4751.36 79080.11 00:23:33.550 { 00:23:33.550 "results": [ 00:23:33.550 { 00:23:33.550 "job": "TLSTESTn1", 00:23:33.550 "core_mask": "0x4", 00:23:33.550 "workload": "verify", 00:23:33.550 "status": "finished", 00:23:33.550 "verify_range": { 00:23:33.550 "start": 0, 00:23:33.550 "length": 8192 00:23:33.550 }, 00:23:33.550 "queue_depth": 128, 00:23:33.550 "io_size": 4096, 00:23:33.550 "runtime": 10.01344, 00:23:33.550 "iops": 5670.279144829349, 00:23:33.550 "mibps": 22.149527909489645, 00:23:33.550 "io_failed": 0, 00:23:33.550 "io_timeout": 0, 00:23:33.550 "avg_latency_us": 22541.832960308097, 00:23:33.550 "min_latency_us": 4751.36, 00:23:33.550 "max_latency_us": 79080.10666666667 00:23:33.550 } 00:23:33.550 ], 00:23:33.550 "core_count": 1 00:23:33.550 } 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3909909 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3909909 ']' 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3909909 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.550 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3909909 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3909909' 00:23:33.811 killing process with pid 3909909 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3909909 00:23:33.811 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.811 00:23:33.811 Latency(us) 00:23:33.811 [2024-11-06T09:15:37.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.811 [2024-11-06T09:15:37.312Z] 
=================================================================================================================== 00:23:33.811 [2024-11-06T09:15:37.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3909909 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TAOJxOAhTC 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TAOJxOAhTC 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TAOJxOAhTC 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TAOJxOAhTC 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3911949 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3911949 /var/tmp/bdevperf.sock 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3911949 ']' 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
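The results JSON printed above for the successful TLSTESTn1 run is internally consistent: MiB/s is simply IOPS times the 4 KiB I/O size, and IOPS times the average latency lands near the configured queue depth of 128, as Little's law would suggest. A quick check of the reported numbers (values copied from the JSON block above):

```python
# Figures from the TLSTESTn1 results JSON above.
iops = 5670.279144829349
io_size = 4096              # -o 4096
runtime_s = 10.01344
avg_latency_us = 22541.832960308097
queue_depth = 128           # -q 128

mib_s = iops * io_size / (1024 * 1024)
print(f"{mib_s:.6f} MiB/s")                       # ~22.149528, matches the reported mibps
print(f"{iops * runtime_s:.0f} total completions")  # over the ~10 s run

# Little's law: in-flight I/Os ~= IOPS * average latency; should be close to -q 128.
print(f"{iops * avg_latency_us / 1e6:.1f} I/Os in flight on average")
```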
00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.811 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.811 [2024-11-06 10:15:37.251686] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:33.811 [2024-11-06 10:15:37.251741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911949 ] 00:23:34.072 [2024-11-06 10:15:37.316469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.072 [2024-11-06 10:15:37.344710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.072 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.072 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.072 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TAOJxOAhTC 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.333 [2024-11-06 10:15:37.766072] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.333 [2024-11-06 10:15:37.773809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:34.333 [2024-11-06 10:15:37.774294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2960 (107): Transport endpoint is not connected 00:23:34.333 [2024-11-06 10:15:37.775290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c2960 (9): Bad file descriptor 00:23:34.333 [2024-11-06 10:15:37.776292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:34.333 [2024-11-06 10:15:37.776301] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:34.333 [2024-11-06 10:15:37.776307] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:34.333 [2024-11-06 10:15:37.776315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:34.333 request: 00:23:34.333 { 00:23:34.333 "name": "TLSTEST", 00:23:34.333 "trtype": "tcp", 00:23:34.333 "traddr": "10.0.0.2", 00:23:34.333 "adrfam": "ipv4", 00:23:34.333 "trsvcid": "4420", 00:23:34.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.333 "prchk_reftag": false, 00:23:34.333 "prchk_guard": false, 00:23:34.333 "hdgst": false, 00:23:34.333 "ddgst": false, 00:23:34.333 "psk": "key0", 00:23:34.333 "allow_unrecognized_csi": false, 00:23:34.333 "method": "bdev_nvme_attach_controller", 00:23:34.333 "req_id": 1 00:23:34.333 } 00:23:34.333 Got JSON-RPC error response 00:23:34.333 response: 00:23:34.333 { 00:23:34.333 "code": -5, 00:23:34.333 "message": "Input/output error" 00:23:34.333 } 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3911949 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3911949 ']' 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3911949 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:34.333 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3911949 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3911949' 00:23:34.594 killing process with pid 3911949 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3911949 00:23:34.594 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.594 00:23:34.594 Latency(us) 00:23:34.594 [2024-11-06T09:15:38.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.594 [2024-11-06T09:15:38.095Z] =================================================================================================================== 00:23:34.594 [2024-11-06T09:15:38.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3911949 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZA1AqHZlk0 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.ZA1AqHZlk0 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZA1AqHZlk0 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.594 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZA1AqHZlk0 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3912265 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3912265 /var/tmp/bdevperf.sock 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3912265 ']' 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.595 10:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.595 [2024-11-06 10:15:38.016476] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:34.595 [2024-11-06 10:15:38.016532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912265 ] 00:23:34.595 [2024-11-06 10:15:38.081017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.855 [2024-11-06 10:15:38.109585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.855 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.855 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.855 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZA1AqHZlk0 00:23:35.115 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:35.115 [2024-11-06 10:15:38.518807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.115 [2024-11-06 10:15:38.525555] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:35.115 [2024-11-06 10:15:38.525574] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:35.115 [2024-11-06 10:15:38.525594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.115 [2024-11-06 10:15:38.525987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e7960 (107): Transport endpoint is not connected 00:23:35.115 [2024-11-06 10:15:38.526982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e7960 (9): Bad file descriptor 00:23:35.115 [2024-11-06 10:15:38.527985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:35.115 [2024-11-06 10:15:38.527993] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.115 [2024-11-06 10:15:38.527999] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:35.115 [2024-11-06 10:15:38.528007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
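This case (target/tls.sh@150) reuses the valid key file but presents it as nqn.2016-06.io.spdk:host2, which was never added to the subsystem with nvmf_subsystem_add_host, so the target's PSK lookup fails before the handshake completes ("Could not find PSK for identity: NVMe0R01 ... host2 ... cnode1" above) and the attach surfaces as the -5 Input/output error in the response dump that follows. The identity the target searches for is visible in that message; a sketch of how it appears to be composed (the "01" presumably being the same hash selector used by format_interchange_psk earlier; an assumption, not taken from the SPDK source):

```python
def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
    # Assumed composition, matching the string logged by tcp.c/posix.c above:
    # "NVMe0R<hash>" followed by the host NQN and the subsystem NQN.
    return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```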
00:23:35.115 request: 00:23:35.115 { 00:23:35.115 "name": "TLSTEST", 00:23:35.115 "trtype": "tcp", 00:23:35.115 "traddr": "10.0.0.2", 00:23:35.116 "adrfam": "ipv4", 00:23:35.116 "trsvcid": "4420", 00:23:35.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.116 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.116 "prchk_reftag": false, 00:23:35.116 "prchk_guard": false, 00:23:35.116 "hdgst": false, 00:23:35.116 "ddgst": false, 00:23:35.116 "psk": "key0", 00:23:35.116 "allow_unrecognized_csi": false, 00:23:35.116 "method": "bdev_nvme_attach_controller", 00:23:35.116 "req_id": 1 00:23:35.116 } 00:23:35.116 Got JSON-RPC error response 00:23:35.116 response: 00:23:35.116 { 00:23:35.116 "code": -5, 00:23:35.116 "message": "Input/output error" 00:23:35.116 } 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3912265 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3912265 ']' 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3912265 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.116 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3912265 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3912265' 00:23:35.376 killing process with pid 3912265 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3912265 00:23:35.376 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.376 00:23:35.376 Latency(us) 00:23:35.376 [2024-11-06T09:15:38.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.376 [2024-11-06T09:15:38.877Z] =================================================================================================================== 00:23:35.376 [2024-11-06T09:15:38.877Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3912265 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZA1AqHZlk0 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.ZA1AqHZlk0 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZA1AqHZlk0 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZA1AqHZlk0 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3912287 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3912287 /var/tmp/bdevperf.sock 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3912287 ']' 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.376 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.376 [2024-11-06 10:15:38.772218] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:35.376 [2024-11-06 10:15:38.772270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912287 ] 00:23:35.376 [2024-11-06 10:15:38.836749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.376 [2024-11-06 10:15:38.865500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.636 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:35.636 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:35.636 10:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZA1AqHZlk0 00:23:35.636 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.896 [2024-11-06 10:15:39.286864] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.896 [2024-11-06 10:15:39.297418] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.896 [2024-11-06 10:15:39.297436] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.896 [2024-11-06 10:15:39.297454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.896 [2024-11-06 10:15:39.297919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3e960 (107): Transport endpoint is not connected 00:23:35.896 [2024-11-06 10:15:39.298916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3e960 (9): Bad file descriptor 00:23:35.896 [2024-11-06 10:15:39.299918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:35.896 [2024-11-06 10:15:39.299926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.896 [2024-11-06 10:15:39.299932] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:35.896 [2024-11-06 10:15:39.299940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:35.896 request: 00:23:35.896 { 00:23:35.896 "name": "TLSTEST", 00:23:35.896 "trtype": "tcp", 00:23:35.896 "traddr": "10.0.0.2", 00:23:35.896 "adrfam": "ipv4", 00:23:35.896 "trsvcid": "4420", 00:23:35.896 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.896 "prchk_reftag": false, 00:23:35.896 "prchk_guard": false, 00:23:35.896 "hdgst": false, 00:23:35.896 "ddgst": false, 00:23:35.896 "psk": "key0", 00:23:35.896 "allow_unrecognized_csi": false, 00:23:35.896 "method": "bdev_nvme_attach_controller", 00:23:35.896 "req_id": 1 00:23:35.896 } 00:23:35.896 Got JSON-RPC error response 00:23:35.896 response: 00:23:35.896 { 00:23:35.896 "code": -5, 00:23:35.896 "message": "Input/output error" 00:23:35.896 } 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3912287 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3912287 ']' 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3912287 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.896 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3912287 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3912287' 00:23:36.157 killing process with pid 3912287 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3912287 00:23:36.157 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.157 00:23:36.157 Latency(us) 00:23:36.157 [2024-11-06T09:15:39.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.157 [2024-11-06T09:15:39.658Z] =================================================================================================================== 00:23:36.157 [2024-11-06T09:15:39.658Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3912287 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:36.157 
10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3912595 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3912595 /var/tmp/bdevperf.sock 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3912595 ']' 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.157 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.157 [2024-11-06 10:15:39.547454] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
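The run above (target/tls.sh@156) repeats the flow with an empty string in place of the key path. bdevperf 3912595 starts normally, but in the lines that follow keyring_file_add_key is rejected ("Non-absolute paths are not allowed", JSON-RPC -1), so no key0 exists and bdev_nvme_attach_controller then fails with -126 "Required key not available". The gate is just a path-shape check before any file I/O; roughly along these lines (an assumption about keyring_file's behaviour, not its actual code):

```python
import os

def check_key_path(path: str) -> None:
    # keyring_file refuses anything that is not an absolute path, which rules
    # out "" as well as relative paths, before the file is ever opened.
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")

for p in ("/tmp/tmp.ZA1AqHZlk0", ""):
    try:
        check_key_path(p)
        print(f"ok: {p}")
    except ValueError as err:
        print(f"rejected: {err}")   # mirrors the keyring error logged below
```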
00:23:36.157 [2024-11-06 10:15:39.547509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912595 ] 00:23:36.157 [2024-11-06 10:15:39.612004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.157 [2024-11-06 10:15:39.640521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.417 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.417 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:36.417 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:36.417 [2024-11-06 10:15:39.873385] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:36.417 [2024-11-06 10:15:39.873412] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.417 request: 00:23:36.417 { 00:23:36.417 "name": "key0", 00:23:36.417 "path": "", 00:23:36.417 "method": "keyring_file_add_key", 00:23:36.417 "req_id": 1 00:23:36.417 } 00:23:36.417 Got JSON-RPC error response 00:23:36.417 response: 00:23:36.417 { 00:23:36.417 "code": -1, 00:23:36.417 "message": "Operation not permitted" 00:23:36.417 } 00:23:36.417 10:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.677 [2024-11-06 10:15:40.058055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.677 [2024-11-06 10:15:40.058091] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:36.677 request: 00:23:36.677 { 00:23:36.677 "name": "TLSTEST", 00:23:36.677 "trtype": "tcp", 00:23:36.677 "traddr": "10.0.0.2", 00:23:36.677 "adrfam": "ipv4", 00:23:36.677 "trsvcid": "4420", 00:23:36.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.677 "prchk_reftag": false, 00:23:36.677 "prchk_guard": false, 00:23:36.677 "hdgst": false, 00:23:36.677 "ddgst": false, 00:23:36.677 "psk": "key0", 00:23:36.677 "allow_unrecognized_csi": false, 00:23:36.677 "method": "bdev_nvme_attach_controller", 00:23:36.677 "req_id": 1 00:23:36.677 } 00:23:36.677 Got JSON-RPC error response 00:23:36.677 response: 00:23:36.677 { 00:23:36.677 "code": -126, 00:23:36.677 "message": "Required key not available" 00:23:36.677 } 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3912595 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3912595 ']' 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3912595 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3912595 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3912595' 00:23:36.677 killing process with pid 3912595 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3912595 00:23:36.677 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.677 00:23:36.677 Latency(us) 00:23:36.677 [2024-11-06T09:15:40.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.677 [2024-11-06T09:15:40.178Z] =================================================================================================================== 00:23:36.677 [2024-11-06T09:15:40.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.677 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3912595 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3906968 ']' 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906968' 00:23:36.938 killing process with pid 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3906968 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:36.938 10:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:36.938 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VKr3S3tjPH 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VKr3S3tjPH 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3912651 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3912651 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3912651 ']' 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.200 10:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.200 [2024-11-06 10:15:40.546298] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:37.200 [2024-11-06 10:15:40.546367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.200 [2024-11-06 10:15:40.650959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.200 [2024-11-06 10:15:40.688817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.200 [2024-11-06 10:15:40.688871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
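For the second half of the suite a 48-byte key with digest 2 is generated (key_long, the NVMeTLSkey-1:02:...wWXNJw==: string above), stored in /tmp/tmp.VKr3S3tjPH, and a fresh nvmf target (pid 3912651, core mask 0x2) is starting up. Assuming the same CRC32-appended layout sketched earlier, such an interchange string can be unpacked and checked like this (illustrative only):

```python
import base64
import struct
import zlib

def parse_interchange_psk(value: str) -> tuple[int, bytes]:
    """Split 'NVMeTLSkey-1:0N:<base64>:' into (hash selector, configured key bytes),
    verifying the trailing CRC32 under the layout assumed earlier."""
    prefix, digest, b64, _trailer = value.split(":")
    if prefix != "NVMeTLSkey-1":
        raise ValueError(f"unexpected prefix {prefix!r}")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    if struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF) != crc:
        raise ValueError("CRC32 mismatch")
    return int(digest), key

# key_long from the trace above: digest 2, 48 bytes of configured key material.
digest, key = parse_interchange_psk(
    "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"
)
print(digest, len(key))  # expected: 2 48
```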
00:23:37.200 [2024-11-06 10:15:40.688878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.200 [2024-11-06 10:15:40.688884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.200 [2024-11-06 10:15:40.688889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.200 [2024-11-06 10:15:40.689496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VKr3S3tjPH 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:38.141 [2024-11-06 10:15:41.527163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.141 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:38.401 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:38.401 [2024-11-06 10:15:41.859981] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.401 [2024-11-06 10:15:41.860154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.401 10:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.661 malloc0 00:23:38.661 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.921 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:38.922 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VKr3S3tjPH 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VKr3S3tjPH 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3913084 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3913084 /var/tmp/bdevperf.sock 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3913084 ']' 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.181 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.181 [2024-11-06 10:15:42.542544] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:39.181 [2024-11-06 10:15:42.542617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913084 ] 00:23:39.181 [2024-11-06 10:15:42.609196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.181 [2024-11-06 10:15:42.638962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.441 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.441 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:39.441 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:39.441 10:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.700 [2024-11-06 10:15:43.064561] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.700 TLSTESTn1 00:23:39.700 10:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.960 Running I/O for 10 seconds... 00:23:41.841 5037.00 IOPS, 19.68 MiB/s [2024-11-06T09:15:46.281Z] 5779.00 IOPS, 22.57 MiB/s [2024-11-06T09:15:47.663Z] 5743.33 IOPS, 22.43 MiB/s [2024-11-06T09:15:48.603Z] 5759.50 IOPS, 22.50 MiB/s [2024-11-06T09:15:49.542Z] 5879.20 IOPS, 22.97 MiB/s [2024-11-06T09:15:50.483Z] 5984.17 IOPS, 23.38 MiB/s [2024-11-06T09:15:51.424Z] 5949.86 IOPS, 23.24 MiB/s [2024-11-06T09:15:52.364Z] 5841.38 IOPS, 22.82 MiB/s [2024-11-06T09:15:53.304Z] 5858.67 IOPS, 22.89 MiB/s [2024-11-06T09:15:53.564Z] 5887.10 IOPS, 23.00 MiB/s 00:23:50.063 Latency(us) 00:23:50.063 [2024-11-06T09:15:53.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.063 Verification LBA range: start 0x0 length 0x2000 00:23:50.063 TLSTESTn1 : 10.05 5872.59 22.94 0.00 0.00 21732.96 4532.91 112721.92 00:23:50.063 [2024-11-06T09:15:53.564Z] =================================================================================================================== 00:23:50.063 [2024-11-06T09:15:53.564Z] Total : 5872.59 22.94 0.00 0.00 21732.96 4532.91 112721.92 00:23:50.063 { 00:23:50.063 "results": [ 00:23:50.063 { 00:23:50.063 "job": "TLSTESTn1", 00:23:50.063 "core_mask": "0x4", 00:23:50.063 "workload": "verify", 00:23:50.063 "status": "finished", 00:23:50.063 "verify_range": { 00:23:50.063 "start": 0, 00:23:50.063 "length": 8192 00:23:50.063 }, 00:23:50.063 "queue_depth": 128, 00:23:50.063 "io_size": 4096, 00:23:50.064 "runtime": 10.046161, 00:23:50.064 "iops": 5872.591530237271, 00:23:50.064 "mibps": 22.93981066498934, 00:23:50.064 "io_failed": 0, 00:23:50.064 "io_timeout": 0, 00:23:50.064 "avg_latency_us": 21732.960223288188, 00:23:50.064 "min_latency_us": 4532.906666666667, 00:23:50.064 "max_latency_us": 112721.92 00:23:50.064 } 00:23:50.064 ], 00:23:50.064 "core_count": 
1 00:23:50.064 } 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3913084 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3913084 ']' 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3913084 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3913084 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3913084' 00:23:50.064 killing process with pid 3913084 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3913084 00:23:50.064 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.064 00:23:50.064 Latency(us) 00:23:50.064 [2024-11-06T09:15:53.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.064 [2024-11-06T09:15:53.565Z] =================================================================================================================== 00:23:50.064 [2024-11-06T09:15:53.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3913084 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VKr3S3tjPH 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VKr3S3tjPH 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VKr3S3tjPH 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VKr3S3tjPH 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.064 10:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VKr3S3tjPH 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3915341 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3915341 /var/tmp/bdevperf.sock 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3915341 ']' 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:50.064 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.064 [2024-11-06 10:15:53.564256] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:50.064 [2024-11-06 10:15:53.564313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915341 ] 00:23:50.324 [2024-11-06 10:15:53.627875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.324 [2024-11-06 10:15:53.656351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.324 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.324 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:50.324 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:50.584 [2024-11-06 10:15:53.925238] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VKr3S3tjPH': 0100666 00:23:50.584 [2024-11-06 10:15:53.925259] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:50.584 request: 00:23:50.584 { 00:23:50.584 "name": "key0", 00:23:50.584 "path": "/tmp/tmp.VKr3S3tjPH", 00:23:50.584 "method": "keyring_file_add_key", 00:23:50.584 "req_id": 1 00:23:50.584 } 00:23:50.584 Got JSON-RPC error response 00:23:50.584 response: 00:23:50.584 { 00:23:50.584 "code": -1, 00:23:50.584 "message": "Operation not permitted" 00:23:50.584 } 00:23:50.584 10:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.844 [2024-11-06 10:15:54.093734] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.844 [2024-11-06 10:15:54.093754] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:50.844 request: 00:23:50.844 { 00:23:50.844 "name": "TLSTEST", 00:23:50.844 "trtype": "tcp", 00:23:50.844 "traddr": "10.0.0.2", 00:23:50.844 "adrfam": "ipv4", 00:23:50.844 "trsvcid": "4420", 00:23:50.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.844 "prchk_reftag": false, 00:23:50.844 "prchk_guard": false, 00:23:50.844 "hdgst": false, 00:23:50.844 "ddgst": false, 00:23:50.844 "psk": "key0", 00:23:50.844 "allow_unrecognized_csi": false, 00:23:50.844 "method": "bdev_nvme_attach_controller", 00:23:50.844 "req_id": 1 00:23:50.844 } 00:23:50.844 Got JSON-RPC error response 00:23:50.844 response: 00:23:50.844 { 00:23:50.844 "code": -126, 00:23:50.844 "message": "Required key not available" 00:23:50.844 } 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3915341 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3915341 ']' 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3915341 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3915341 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3915341' 00:23:50.844 killing process with pid 3915341 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3915341 00:23:50.844 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.844 00:23:50.844 Latency(us) 00:23:50.844 [2024-11-06T09:15:54.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.844 [2024-11-06T09:15:54.345Z] =================================================================================================================== 00:23:50.844 [2024-11-06T09:15:54.345Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3915341 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:50.844 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3912651 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3912651 ']' 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3912651 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.845 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3912651 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3912651' 00:23:51.106 killing process with pid 3912651 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3912651 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3912651 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3915374 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3915374 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3915374 ']' 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.106 10:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.106 [2024-11-06 10:15:54.520608] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:51.106 [2024-11-06 10:15:54.520666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.367 [2024-11-06 10:15:54.619091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.367 [2024-11-06 10:15:54.649981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.367 [2024-11-06 10:15:54.650015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.367 [2024-11-06 10:15:54.650021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.367 [2024-11-06 10:15:54.650026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.367 [2024-11-06 10:15:54.650030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
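The keyring and attach failures above, and the setup_nvmf_tgt failure that follows, are the point of this negative test: after target/tls.sh flips the PSK file to mode 0666, SPDK's keyring refuses to load it, so neither bdev_nvme_attach_controller nor nvmf_subsystem_add_host can resolve key0. A minimal sketch of the permission handling being exercised, using only paths and commands that already appear in this log (rpc.py shortened from its full workspace path; /tmp/tmp.VKr3S3tjPH is the test's mktemp key file):

  # accepted: the key file is private to the owner
  chmod 0600 /tmp/tmp.VKr3S3tjPH
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH

  # rejected: keyring.c logs "Invalid permissions for key file '/tmp/tmp.VKr3S3tjPH': 0100666"
  # and keyring_file_add_key returns "Operation not permitted"
  chmod 0666 /tmp/tmp.VKr3S3tjPH
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH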
00:23:51.367 [2024-11-06 10:15:54.650528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.937 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:51.937 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:51.937 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.937 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.937 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VKr3S3tjPH 00:23:51.938 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.198 [2024-11-06 10:15:55.495808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.198 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.198 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.459 [2024-11-06 10:15:55.876748] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.459 [2024-11-06 10:15:55.876931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.459 10:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.719 malloc0 00:23:52.719 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.719 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:52.979 [2024-11-06 
10:15:56.347799] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VKr3S3tjPH': 0100666 00:23:52.979 [2024-11-06 10:15:56.347819] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.979 request: 00:23:52.979 { 00:23:52.979 "name": "key0", 00:23:52.979 "path": "/tmp/tmp.VKr3S3tjPH", 00:23:52.979 "method": "keyring_file_add_key", 00:23:52.979 "req_id": 1 00:23:52.979 } 00:23:52.979 Got JSON-RPC error response 00:23:52.979 response: 00:23:52.979 { 00:23:52.979 "code": -1, 00:23:52.979 "message": "Operation not permitted" 00:23:52.979 } 00:23:52.979 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.240 [2024-11-06 10:15:56.556340] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:53.240 [2024-11-06 10:15:56.556370] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:53.240 request: 00:23:53.240 { 00:23:53.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.240 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.240 "psk": "key0", 00:23:53.240 "method": "nvmf_subsystem_add_host", 00:23:53.240 "req_id": 1 00:23:53.240 } 00:23:53.240 Got JSON-RPC error response 00:23:53.240 response: 00:23:53.240 { 00:23:53.240 "code": -32603, 00:23:53.240 "message": "Internal error" 00:23:53.240 } 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3915374 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3915374 ']' 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3915374 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3915374 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3915374' 00:23:53.240 killing process with pid 3915374 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3915374 00:23:53.240 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3915374 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VKr3S3tjPH 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:53.501 10:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3915984 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3915984 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3915984 ']' 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.501 [2024-11-06 10:15:56.813139] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:53.501 [2024-11-06 10:15:56.813194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.501 [2024-11-06 10:15:56.884368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.501 [2024-11-06 10:15:56.912876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.501 [2024-11-06 10:15:56.912907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.501 [2024-11-06 10:15:56.912913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.501 [2024-11-06 10:15:56.912918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.501 [2024-11-06 10:15:56.912922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
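With the key file restored to 0600, the log below repeats the full TLS bring-up and then dumps the resulting target and bdevperf configurations via save_config. A condensed sketch of that RPC sequence, copied from the commands visible in this log (rpc.py shortened from its full workspace path; 10.0.0.2:4420, the cnode1/host1 NQNs, and key0 are the test's fixed values):

  # target side (nvmf_tgt)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a secure (TLS) channel on the listener
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side (bdevperf, over its own RPC socket)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0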
00:23:53.501 [2024-11-06 10:15:56.913372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.501 10:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:53.501 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.501 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.501 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.762 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.762 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:23:53.762 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VKr3S3tjPH 00:23:53.762 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.762 [2024-11-06 10:15:57.195619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.762 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.023 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.023 [2024-11-06 10:15:57.516408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.023 [2024-11-06 10:15:57.516609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.283 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.283 malloc0 00:23:54.283 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.545 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:54.545 10:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3916192 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3916192 /var/tmp/bdevperf.sock 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3916192 ']' 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:54.805 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.805 [2024-11-06 10:15:58.205720] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:54.805 [2024-11-06 10:15:58.205775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916192 ] 00:23:54.805 [2024-11-06 10:15:58.270213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.805 [2024-11-06 10:15:58.299852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.066 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.066 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:55.066 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:23:55.066 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.326 [2024-11-06 10:15:58.717181] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.326 TLSTESTn1 00:23:55.326 10:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:55.586 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:55.586 "subsystems": [ 00:23:55.586 { 00:23:55.586 "subsystem": "keyring", 00:23:55.586 "config": [ 00:23:55.586 { 00:23:55.586 "method": "keyring_file_add_key", 00:23:55.586 "params": { 00:23:55.586 "name": "key0", 00:23:55.586 "path": "/tmp/tmp.VKr3S3tjPH" 00:23:55.586 } 00:23:55.586 } 00:23:55.586 ] 00:23:55.586 }, 00:23:55.586 { 00:23:55.586 "subsystem": "iobuf", 00:23:55.586 "config": [ 00:23:55.586 { 00:23:55.586 "method": "iobuf_set_options", 00:23:55.586 "params": { 00:23:55.586 "small_pool_count": 8192, 00:23:55.586 "large_pool_count": 1024, 00:23:55.586 "small_bufsize": 8192, 00:23:55.586 "large_bufsize": 135168, 00:23:55.586 "enable_numa": false 00:23:55.586 } 00:23:55.586 } 00:23:55.586 ] 00:23:55.586 }, 00:23:55.586 { 00:23:55.586 "subsystem": "sock", 00:23:55.586 "config": [ 00:23:55.586 { 00:23:55.586 "method": "sock_set_default_impl", 00:23:55.586 "params": { 00:23:55.586 "impl_name": "posix" 
00:23:55.586 } 00:23:55.586 }, 00:23:55.586 { 00:23:55.586 "method": "sock_impl_set_options", 00:23:55.586 "params": { 00:23:55.586 "impl_name": "ssl", 00:23:55.586 "recv_buf_size": 4096, 00:23:55.586 "send_buf_size": 4096, 00:23:55.586 "enable_recv_pipe": true, 00:23:55.586 "enable_quickack": false, 00:23:55.586 "enable_placement_id": 0, 00:23:55.586 "enable_zerocopy_send_server": true, 00:23:55.586 "enable_zerocopy_send_client": false, 00:23:55.586 "zerocopy_threshold": 0, 00:23:55.586 "tls_version": 0, 00:23:55.586 "enable_ktls": false 00:23:55.586 } 00:23:55.586 }, 00:23:55.586 { 00:23:55.586 "method": "sock_impl_set_options", 00:23:55.586 "params": { 00:23:55.586 "impl_name": "posix", 00:23:55.586 "recv_buf_size": 2097152, 00:23:55.586 "send_buf_size": 2097152, 00:23:55.586 "enable_recv_pipe": true, 00:23:55.586 "enable_quickack": false, 00:23:55.586 "enable_placement_id": 0, 00:23:55.586 "enable_zerocopy_send_server": true, 00:23:55.586 "enable_zerocopy_send_client": false, 00:23:55.586 "zerocopy_threshold": 0, 00:23:55.586 "tls_version": 0, 00:23:55.587 "enable_ktls": false 00:23:55.587 } 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "vmd", 00:23:55.587 "config": [] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "accel", 00:23:55.587 "config": [ 00:23:55.587 { 00:23:55.587 "method": "accel_set_options", 00:23:55.587 "params": { 00:23:55.587 "small_cache_size": 128, 00:23:55.587 "large_cache_size": 16, 00:23:55.587 "task_count": 2048, 00:23:55.587 "sequence_count": 2048, 00:23:55.587 "buf_count": 2048 00:23:55.587 } 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "bdev", 00:23:55.587 "config": [ 00:23:55.587 { 00:23:55.587 "method": "bdev_set_options", 00:23:55.587 "params": { 00:23:55.587 "bdev_io_pool_size": 65535, 00:23:55.587 "bdev_io_cache_size": 256, 00:23:55.587 "bdev_auto_examine": true, 00:23:55.587 "iobuf_small_cache_size": 128, 00:23:55.587 "iobuf_large_cache_size": 16 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_raid_set_options", 00:23:55.587 "params": { 00:23:55.587 "process_window_size_kb": 1024, 00:23:55.587 "process_max_bandwidth_mb_sec": 0 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_iscsi_set_options", 00:23:55.587 "params": { 00:23:55.587 "timeout_sec": 30 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_nvme_set_options", 00:23:55.587 "params": { 00:23:55.587 "action_on_timeout": "none", 00:23:55.587 "timeout_us": 0, 00:23:55.587 "timeout_admin_us": 0, 00:23:55.587 "keep_alive_timeout_ms": 10000, 00:23:55.587 "arbitration_burst": 0, 00:23:55.587 "low_priority_weight": 0, 00:23:55.587 "medium_priority_weight": 0, 00:23:55.587 "high_priority_weight": 0, 00:23:55.587 "nvme_adminq_poll_period_us": 10000, 00:23:55.587 "nvme_ioq_poll_period_us": 0, 00:23:55.587 "io_queue_requests": 0, 00:23:55.587 "delay_cmd_submit": true, 00:23:55.587 "transport_retry_count": 4, 00:23:55.587 "bdev_retry_count": 3, 00:23:55.587 "transport_ack_timeout": 0, 00:23:55.587 "ctrlr_loss_timeout_sec": 0, 00:23:55.587 "reconnect_delay_sec": 0, 00:23:55.587 "fast_io_fail_timeout_sec": 0, 00:23:55.587 "disable_auto_failback": false, 00:23:55.587 "generate_uuids": false, 00:23:55.587 "transport_tos": 0, 00:23:55.587 "nvme_error_stat": false, 00:23:55.587 "rdma_srq_size": 0, 00:23:55.587 "io_path_stat": false, 00:23:55.587 "allow_accel_sequence": false, 00:23:55.587 "rdma_max_cq_size": 0, 00:23:55.587 
"rdma_cm_event_timeout_ms": 0, 00:23:55.587 "dhchap_digests": [ 00:23:55.587 "sha256", 00:23:55.587 "sha384", 00:23:55.587 "sha512" 00:23:55.587 ], 00:23:55.587 "dhchap_dhgroups": [ 00:23:55.587 "null", 00:23:55.587 "ffdhe2048", 00:23:55.587 "ffdhe3072", 00:23:55.587 "ffdhe4096", 00:23:55.587 "ffdhe6144", 00:23:55.587 "ffdhe8192" 00:23:55.587 ] 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_nvme_set_hotplug", 00:23:55.587 "params": { 00:23:55.587 "period_us": 100000, 00:23:55.587 "enable": false 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_malloc_create", 00:23:55.587 "params": { 00:23:55.587 "name": "malloc0", 00:23:55.587 "num_blocks": 8192, 00:23:55.587 "block_size": 4096, 00:23:55.587 "physical_block_size": 4096, 00:23:55.587 "uuid": "2429dcd0-6c7d-4fc6-8aa9-f8a0ff39c951", 00:23:55.587 "optimal_io_boundary": 0, 00:23:55.587 "md_size": 0, 00:23:55.587 "dif_type": 0, 00:23:55.587 "dif_is_head_of_md": false, 00:23:55.587 "dif_pi_format": 0 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "bdev_wait_for_examine" 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "nbd", 00:23:55.587 "config": [] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "scheduler", 00:23:55.587 "config": [ 00:23:55.587 { 00:23:55.587 "method": "framework_set_scheduler", 00:23:55.587 "params": { 00:23:55.587 "name": "static" 00:23:55.587 } 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "subsystem": "nvmf", 00:23:55.587 "config": [ 00:23:55.587 { 00:23:55.587 "method": "nvmf_set_config", 00:23:55.587 "params": { 00:23:55.587 "discovery_filter": "match_any", 00:23:55.587 "admin_cmd_passthru": { 00:23:55.587 "identify_ctrlr": false 00:23:55.587 }, 00:23:55.587 "dhchap_digests": [ 00:23:55.587 "sha256", 00:23:55.587 "sha384", 00:23:55.587 "sha512" 00:23:55.587 ], 00:23:55.587 "dhchap_dhgroups": [ 00:23:55.587 "null", 00:23:55.587 "ffdhe2048", 00:23:55.587 "ffdhe3072", 00:23:55.587 "ffdhe4096", 00:23:55.587 "ffdhe6144", 00:23:55.587 "ffdhe8192" 00:23:55.587 ] 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_set_max_subsystems", 00:23:55.587 "params": { 00:23:55.587 "max_subsystems": 1024 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_set_crdt", 00:23:55.587 "params": { 00:23:55.587 "crdt1": 0, 00:23:55.587 "crdt2": 0, 00:23:55.587 "crdt3": 0 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_create_transport", 00:23:55.587 "params": { 00:23:55.587 "trtype": "TCP", 00:23:55.587 "max_queue_depth": 128, 00:23:55.587 "max_io_qpairs_per_ctrlr": 127, 00:23:55.587 "in_capsule_data_size": 4096, 00:23:55.587 "max_io_size": 131072, 00:23:55.587 "io_unit_size": 131072, 00:23:55.587 "max_aq_depth": 128, 00:23:55.587 "num_shared_buffers": 511, 00:23:55.587 "buf_cache_size": 4294967295, 00:23:55.587 "dif_insert_or_strip": false, 00:23:55.587 "zcopy": false, 00:23:55.587 "c2h_success": false, 00:23:55.587 "sock_priority": 0, 00:23:55.587 "abort_timeout_sec": 1, 00:23:55.587 "ack_timeout": 0, 00:23:55.587 "data_wr_pool_size": 0 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_create_subsystem", 00:23:55.587 "params": { 00:23:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.587 "allow_any_host": false, 00:23:55.587 "serial_number": "SPDK00000000000001", 00:23:55.587 "model_number": "SPDK bdev Controller", 00:23:55.587 "max_namespaces": 10, 00:23:55.587 "min_cntlid": 1, 00:23:55.587 
"max_cntlid": 65519, 00:23:55.587 "ana_reporting": false 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_subsystem_add_host", 00:23:55.587 "params": { 00:23:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.587 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.587 "psk": "key0" 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_subsystem_add_ns", 00:23:55.587 "params": { 00:23:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.587 "namespace": { 00:23:55.587 "nsid": 1, 00:23:55.587 "bdev_name": "malloc0", 00:23:55.587 "nguid": "2429DCD06C7D4FC68AA9F8A0FF39C951", 00:23:55.587 "uuid": "2429dcd0-6c7d-4fc6-8aa9-f8a0ff39c951", 00:23:55.587 "no_auto_visible": false 00:23:55.587 } 00:23:55.587 } 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "method": "nvmf_subsystem_add_listener", 00:23:55.587 "params": { 00:23:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.587 "listen_address": { 00:23:55.587 "trtype": "TCP", 00:23:55.587 "adrfam": "IPv4", 00:23:55.587 "traddr": "10.0.0.2", 00:23:55.587 "trsvcid": "4420" 00:23:55.587 }, 00:23:55.587 "secure_channel": true 00:23:55.587 } 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 }' 00:23:55.588 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:55.848 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:55.848 "subsystems": [ 00:23:55.848 { 00:23:55.848 "subsystem": "keyring", 00:23:55.848 "config": [ 00:23:55.848 { 00:23:55.848 "method": "keyring_file_add_key", 00:23:55.848 "params": { 00:23:55.848 "name": "key0", 00:23:55.848 "path": "/tmp/tmp.VKr3S3tjPH" 00:23:55.848 } 00:23:55.848 } 00:23:55.848 ] 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "subsystem": "iobuf", 00:23:55.848 "config": [ 00:23:55.848 { 00:23:55.848 "method": "iobuf_set_options", 00:23:55.848 "params": { 00:23:55.848 "small_pool_count": 8192, 00:23:55.848 "large_pool_count": 1024, 00:23:55.848 "small_bufsize": 8192, 00:23:55.848 "large_bufsize": 135168, 00:23:55.848 "enable_numa": false 00:23:55.848 } 00:23:55.848 } 00:23:55.848 ] 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "subsystem": "sock", 00:23:55.848 "config": [ 00:23:55.848 { 00:23:55.848 "method": "sock_set_default_impl", 00:23:55.848 "params": { 00:23:55.848 "impl_name": "posix" 00:23:55.848 } 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "method": "sock_impl_set_options", 00:23:55.848 "params": { 00:23:55.848 "impl_name": "ssl", 00:23:55.848 "recv_buf_size": 4096, 00:23:55.848 "send_buf_size": 4096, 00:23:55.848 "enable_recv_pipe": true, 00:23:55.848 "enable_quickack": false, 00:23:55.848 "enable_placement_id": 0, 00:23:55.848 "enable_zerocopy_send_server": true, 00:23:55.848 "enable_zerocopy_send_client": false, 00:23:55.848 "zerocopy_threshold": 0, 00:23:55.848 "tls_version": 0, 00:23:55.848 "enable_ktls": false 00:23:55.848 } 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "method": "sock_impl_set_options", 00:23:55.848 "params": { 00:23:55.848 "impl_name": "posix", 00:23:55.848 "recv_buf_size": 2097152, 00:23:55.848 "send_buf_size": 2097152, 00:23:55.848 "enable_recv_pipe": true, 00:23:55.848 "enable_quickack": false, 00:23:55.848 "enable_placement_id": 0, 00:23:55.848 "enable_zerocopy_send_server": true, 00:23:55.848 "enable_zerocopy_send_client": false, 00:23:55.848 "zerocopy_threshold": 0, 00:23:55.848 "tls_version": 0, 00:23:55.848 "enable_ktls": false 00:23:55.848 } 00:23:55.848 
} 00:23:55.848 ] 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "subsystem": "vmd", 00:23:55.848 "config": [] 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "subsystem": "accel", 00:23:55.848 "config": [ 00:23:55.848 { 00:23:55.848 "method": "accel_set_options", 00:23:55.848 "params": { 00:23:55.848 "small_cache_size": 128, 00:23:55.848 "large_cache_size": 16, 00:23:55.848 "task_count": 2048, 00:23:55.848 "sequence_count": 2048, 00:23:55.848 "buf_count": 2048 00:23:55.848 } 00:23:55.848 } 00:23:55.848 ] 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "subsystem": "bdev", 00:23:55.848 "config": [ 00:23:55.848 { 00:23:55.848 "method": "bdev_set_options", 00:23:55.848 "params": { 00:23:55.849 "bdev_io_pool_size": 65535, 00:23:55.849 "bdev_io_cache_size": 256, 00:23:55.849 "bdev_auto_examine": true, 00:23:55.849 "iobuf_small_cache_size": 128, 00:23:55.849 "iobuf_large_cache_size": 16 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": "bdev_raid_set_options", 00:23:55.849 "params": { 00:23:55.849 "process_window_size_kb": 1024, 00:23:55.849 "process_max_bandwidth_mb_sec": 0 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": "bdev_iscsi_set_options", 00:23:55.849 "params": { 00:23:55.849 "timeout_sec": 30 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": "bdev_nvme_set_options", 00:23:55.849 "params": { 00:23:55.849 "action_on_timeout": "none", 00:23:55.849 "timeout_us": 0, 00:23:55.849 "timeout_admin_us": 0, 00:23:55.849 "keep_alive_timeout_ms": 10000, 00:23:55.849 "arbitration_burst": 0, 00:23:55.849 "low_priority_weight": 0, 00:23:55.849 "medium_priority_weight": 0, 00:23:55.849 "high_priority_weight": 0, 00:23:55.849 "nvme_adminq_poll_period_us": 10000, 00:23:55.849 "nvme_ioq_poll_period_us": 0, 00:23:55.849 "io_queue_requests": 512, 00:23:55.849 "delay_cmd_submit": true, 00:23:55.849 "transport_retry_count": 4, 00:23:55.849 "bdev_retry_count": 3, 00:23:55.849 "transport_ack_timeout": 0, 00:23:55.849 "ctrlr_loss_timeout_sec": 0, 00:23:55.849 "reconnect_delay_sec": 0, 00:23:55.849 "fast_io_fail_timeout_sec": 0, 00:23:55.849 "disable_auto_failback": false, 00:23:55.849 "generate_uuids": false, 00:23:55.849 "transport_tos": 0, 00:23:55.849 "nvme_error_stat": false, 00:23:55.849 "rdma_srq_size": 0, 00:23:55.849 "io_path_stat": false, 00:23:55.849 "allow_accel_sequence": false, 00:23:55.849 "rdma_max_cq_size": 0, 00:23:55.849 "rdma_cm_event_timeout_ms": 0, 00:23:55.849 "dhchap_digests": [ 00:23:55.849 "sha256", 00:23:55.849 "sha384", 00:23:55.849 "sha512" 00:23:55.849 ], 00:23:55.849 "dhchap_dhgroups": [ 00:23:55.849 "null", 00:23:55.849 "ffdhe2048", 00:23:55.849 "ffdhe3072", 00:23:55.849 "ffdhe4096", 00:23:55.849 "ffdhe6144", 00:23:55.849 "ffdhe8192" 00:23:55.849 ] 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": "bdev_nvme_attach_controller", 00:23:55.849 "params": { 00:23:55.849 "name": "TLSTEST", 00:23:55.849 "trtype": "TCP", 00:23:55.849 "adrfam": "IPv4", 00:23:55.849 "traddr": "10.0.0.2", 00:23:55.849 "trsvcid": "4420", 00:23:55.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.849 "prchk_reftag": false, 00:23:55.849 "prchk_guard": false, 00:23:55.849 "ctrlr_loss_timeout_sec": 0, 00:23:55.849 "reconnect_delay_sec": 0, 00:23:55.849 "fast_io_fail_timeout_sec": 0, 00:23:55.849 "psk": "key0", 00:23:55.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.849 "hdgst": false, 00:23:55.849 "ddgst": false, 00:23:55.849 "multipath": "multipath" 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": 
"bdev_nvme_set_hotplug", 00:23:55.849 "params": { 00:23:55.849 "period_us": 100000, 00:23:55.849 "enable": false 00:23:55.849 } 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "method": "bdev_wait_for_examine" 00:23:55.849 } 00:23:55.849 ] 00:23:55.849 }, 00:23:55.849 { 00:23:55.849 "subsystem": "nbd", 00:23:55.849 "config": [] 00:23:55.849 } 00:23:55.849 ] 00:23:55.849 }' 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3916192 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3916192 ']' 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3916192 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.849 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3916192 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3916192' 00:23:56.109 killing process with pid 3916192 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3916192 00:23:56.109 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.109 00:23:56.109 Latency(us) 00:23:56.109 [2024-11-06T09:15:59.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.109 [2024-11-06T09:15:59.610Z] =================================================================================================================== 00:23:56.109 [2024-11-06T09:15:59.610Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3916192 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3915984 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3915984 ']' 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3915984 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3915984 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3915984' 00:23:56.109 killing process with pid 3915984 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3915984 00:23:56.109 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3915984 00:23:56.370 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:56.370 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.370 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.370 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.370 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:56.370 "subsystems": [ 00:23:56.370 { 00:23:56.370 "subsystem": "keyring", 00:23:56.370 "config": [ 00:23:56.370 { 00:23:56.370 "method": "keyring_file_add_key", 00:23:56.370 "params": { 00:23:56.370 "name": "key0", 00:23:56.370 "path": "/tmp/tmp.VKr3S3tjPH" 00:23:56.370 } 00:23:56.370 } 00:23:56.370 ] 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "subsystem": "iobuf", 00:23:56.370 "config": [ 00:23:56.370 { 00:23:56.370 "method": "iobuf_set_options", 00:23:56.370 "params": { 00:23:56.370 "small_pool_count": 8192, 00:23:56.370 "large_pool_count": 1024, 00:23:56.370 "small_bufsize": 8192, 00:23:56.370 "large_bufsize": 135168, 00:23:56.370 "enable_numa": false 00:23:56.370 } 00:23:56.370 } 00:23:56.370 ] 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "subsystem": "sock", 00:23:56.370 "config": [ 00:23:56.370 { 00:23:56.370 "method": "sock_set_default_impl", 00:23:56.370 "params": { 00:23:56.370 "impl_name": "posix" 00:23:56.370 } 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "method": "sock_impl_set_options", 00:23:56.370 "params": { 00:23:56.370 "impl_name": "ssl", 00:23:56.370 "recv_buf_size": 4096, 00:23:56.370 "send_buf_size": 4096, 00:23:56.370 "enable_recv_pipe": true, 00:23:56.370 "enable_quickack": false, 00:23:56.370 "enable_placement_id": 0, 00:23:56.370 "enable_zerocopy_send_server": true, 00:23:56.370 "enable_zerocopy_send_client": false, 00:23:56.370 "zerocopy_threshold": 0, 00:23:56.370 "tls_version": 0, 00:23:56.370 "enable_ktls": false 00:23:56.370 } 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "method": "sock_impl_set_options", 00:23:56.370 "params": { 00:23:56.370 "impl_name": "posix", 00:23:56.370 "recv_buf_size": 2097152, 00:23:56.370 "send_buf_size": 2097152, 00:23:56.370 "enable_recv_pipe": true, 00:23:56.370 "enable_quickack": false, 00:23:56.370 "enable_placement_id": 0, 00:23:56.370 "enable_zerocopy_send_server": true, 00:23:56.370 "enable_zerocopy_send_client": false, 00:23:56.370 "zerocopy_threshold": 0, 00:23:56.370 "tls_version": 0, 00:23:56.370 "enable_ktls": false 00:23:56.370 } 00:23:56.370 } 00:23:56.370 ] 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "subsystem": "vmd", 00:23:56.370 "config": [] 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "subsystem": "accel", 00:23:56.370 "config": [ 00:23:56.370 { 00:23:56.370 "method": "accel_set_options", 00:23:56.370 "params": { 00:23:56.370 "small_cache_size": 128, 00:23:56.370 "large_cache_size": 16, 00:23:56.370 "task_count": 2048, 00:23:56.370 "sequence_count": 2048, 00:23:56.370 "buf_count": 2048 00:23:56.370 } 00:23:56.370 } 00:23:56.370 ] 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "subsystem": "bdev", 00:23:56.370 "config": [ 00:23:56.370 { 00:23:56.370 "method": "bdev_set_options", 00:23:56.370 "params": { 00:23:56.370 "bdev_io_pool_size": 65535, 00:23:56.370 "bdev_io_cache_size": 256, 00:23:56.370 "bdev_auto_examine": true, 00:23:56.370 "iobuf_small_cache_size": 128, 00:23:56.370 "iobuf_large_cache_size": 16 00:23:56.370 } 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "method": "bdev_raid_set_options", 00:23:56.370 "params": { 00:23:56.370 
"process_window_size_kb": 1024, 00:23:56.370 "process_max_bandwidth_mb_sec": 0 00:23:56.370 } 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "method": "bdev_iscsi_set_options", 00:23:56.370 "params": { 00:23:56.370 "timeout_sec": 30 00:23:56.370 } 00:23:56.370 }, 00:23:56.370 { 00:23:56.370 "method": "bdev_nvme_set_options", 00:23:56.370 "params": { 00:23:56.370 "action_on_timeout": "none", 00:23:56.370 "timeout_us": 0, 00:23:56.370 "timeout_admin_us": 0, 00:23:56.370 "keep_alive_timeout_ms": 10000, 00:23:56.370 "arbitration_burst": 0, 00:23:56.370 "low_priority_weight": 0, 00:23:56.370 "medium_priority_weight": 0, 00:23:56.370 "high_priority_weight": 0, 00:23:56.370 "nvme_adminq_poll_period_us": 10000, 00:23:56.370 "nvme_ioq_poll_period_us": 0, 00:23:56.370 "io_queue_requests": 0, 00:23:56.370 "delay_cmd_submit": true, 00:23:56.370 "transport_retry_count": 4, 00:23:56.370 "bdev_retry_count": 3, 00:23:56.370 "transport_ack_timeout": 0, 00:23:56.370 "ctrlr_loss_timeout_sec": 0, 00:23:56.370 "reconnect_delay_sec": 0, 00:23:56.371 "fast_io_fail_timeout_sec": 0, 00:23:56.371 "disable_auto_failback": false, 00:23:56.371 "generate_uuids": false, 00:23:56.371 "transport_tos": 0, 00:23:56.371 "nvme_error_stat": false, 00:23:56.371 "rdma_srq_size": 0, 00:23:56.371 "io_path_stat": false, 00:23:56.371 "allow_accel_sequence": false, 00:23:56.371 "rdma_max_cq_size": 0, 00:23:56.371 "rdma_cm_event_timeout_ms": 0, 00:23:56.371 "dhchap_digests": [ 00:23:56.371 "sha256", 00:23:56.371 "sha384", 00:23:56.371 "sha512" 00:23:56.371 ], 00:23:56.371 "dhchap_dhgroups": [ 00:23:56.371 "null", 00:23:56.371 "ffdhe2048", 00:23:56.371 "ffdhe3072", 00:23:56.371 "ffdhe4096", 00:23:56.371 "ffdhe6144", 00:23:56.371 "ffdhe8192" 00:23:56.371 ] 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "bdev_nvme_set_hotplug", 00:23:56.371 "params": { 00:23:56.371 "period_us": 100000, 00:23:56.371 "enable": false 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "bdev_malloc_create", 00:23:56.371 "params": { 00:23:56.371 "name": "malloc0", 00:23:56.371 "num_blocks": 8192, 00:23:56.371 "block_size": 4096, 00:23:56.371 "physical_block_size": 4096, 00:23:56.371 "uuid": "2429dcd0-6c7d-4fc6-8aa9-f8a0ff39c951", 00:23:56.371 "optimal_io_boundary": 0, 00:23:56.371 "md_size": 0, 00:23:56.371 "dif_type": 0, 00:23:56.371 "dif_is_head_of_md": false, 00:23:56.371 "dif_pi_format": 0 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "bdev_wait_for_examine" 00:23:56.371 } 00:23:56.371 ] 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "subsystem": "nbd", 00:23:56.371 "config": [] 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "subsystem": "scheduler", 00:23:56.371 "config": [ 00:23:56.371 { 00:23:56.371 "method": "framework_set_scheduler", 00:23:56.371 "params": { 00:23:56.371 "name": "static" 00:23:56.371 } 00:23:56.371 } 00:23:56.371 ] 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "subsystem": "nvmf", 00:23:56.371 "config": [ 00:23:56.371 { 00:23:56.371 "method": "nvmf_set_config", 00:23:56.371 "params": { 00:23:56.371 "discovery_filter": "match_any", 00:23:56.371 "admin_cmd_passthru": { 00:23:56.371 "identify_ctrlr": false 00:23:56.371 }, 00:23:56.371 "dhchap_digests": [ 00:23:56.371 "sha256", 00:23:56.371 "sha384", 00:23:56.371 "sha512" 00:23:56.371 ], 00:23:56.371 "dhchap_dhgroups": [ 00:23:56.371 "null", 00:23:56.371 "ffdhe2048", 00:23:56.371 "ffdhe3072", 00:23:56.371 "ffdhe4096", 00:23:56.371 "ffdhe6144", 00:23:56.371 "ffdhe8192" 00:23:56.371 ] 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 
00:23:56.371 "method": "nvmf_set_max_subsystems", 00:23:56.371 "params": { 00:23:56.371 "max_subsystems": 1024 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_set_crdt", 00:23:56.371 "params": { 00:23:56.371 "crdt1": 0, 00:23:56.371 "crdt2": 0, 00:23:56.371 "crdt3": 0 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_create_transport", 00:23:56.371 "params": { 00:23:56.371 "trtype": "TCP", 00:23:56.371 "max_queue_depth": 128, 00:23:56.371 "max_io_qpairs_per_ctrlr": 127, 00:23:56.371 "in_capsule_data_size": 4096, 00:23:56.371 "max_io_size": 131072, 00:23:56.371 "io_unit_size": 131072, 00:23:56.371 "max_aq_depth": 128, 00:23:56.371 "num_shared_buffers": 511, 00:23:56.371 "buf_cache_size": 4294967295, 00:23:56.371 "dif_insert_or_strip": false, 00:23:56.371 "zcopy": false, 00:23:56.371 "c2h_success": false, 00:23:56.371 "sock_priority": 0, 00:23:56.371 "abort_timeout_sec": 1, 00:23:56.371 "ack_timeout": 0, 00:23:56.371 "data_wr_pool_size": 0 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_create_subsystem", 00:23:56.371 "params": { 00:23:56.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.371 "allow_any_host": false, 00:23:56.371 "serial_number": "SPDK00000000000001", 00:23:56.371 "model_number": "SPDK bdev Controller", 00:23:56.371 "max_namespaces": 10, 00:23:56.371 "min_cntlid": 1, 00:23:56.371 "max_cntlid": 65519, 00:23:56.371 "ana_reporting": false 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_subsystem_add_host", 00:23:56.371 "params": { 00:23:56.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.371 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.371 "psk": "key0" 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_subsystem_add_ns", 00:23:56.371 "params": { 00:23:56.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.371 "namespace": { 00:23:56.371 "nsid": 1, 00:23:56.371 "bdev_name": "malloc0", 00:23:56.371 "nguid": "2429DCD06C7D4FC68AA9F8A0FF39C951", 00:23:56.371 "uuid": "2429dcd0-6c7d-4fc6-8aa9-f8a0ff39c951", 00:23:56.371 "no_auto_visible": false 00:23:56.371 } 00:23:56.371 } 00:23:56.371 }, 00:23:56.371 { 00:23:56.371 "method": "nvmf_subsystem_add_listener", 00:23:56.371 "params": { 00:23:56.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.371 "listen_address": { 00:23:56.371 "trtype": "TCP", 00:23:56.371 "adrfam": "IPv4", 00:23:56.371 "traddr": "10.0.0.2", 00:23:56.371 "trsvcid": "4420" 00:23:56.371 }, 00:23:56.371 "secure_channel": true 00:23:56.371 } 00:23:56.371 } 00:23:56.371 ] 00:23:56.371 } 00:23:56.371 ] 00:23:56.371 }' 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3916459 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3916459 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3916459 ']' 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:23:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.371 10:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.371 [2024-11-06 10:15:59.712760] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:56.371 [2024-11-06 10:15:59.712814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.371 [2024-11-06 10:15:59.812586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.371 [2024-11-06 10:15:59.840797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.371 [2024-11-06 10:15:59.840830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.371 [2024-11-06 10:15:59.840836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.371 [2024-11-06 10:15:59.840841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.371 [2024-11-06 10:15:59.840845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.371 [2024-11-06 10:15:59.841380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.632 [2024-11-06 10:16:00.034950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.632 [2024-11-06 10:16:00.066971] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.632 [2024-11-06 10:16:00.067165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3916798 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3916798 /var/tmp/bdevperf.sock 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3916798 ']' 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:57.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.206 10:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:57.206 "subsystems": [ 00:23:57.206 { 00:23:57.206 "subsystem": "keyring", 00:23:57.206 "config": [ 00:23:57.206 { 00:23:57.206 "method": "keyring_file_add_key", 00:23:57.206 "params": { 00:23:57.206 "name": "key0", 00:23:57.206 "path": "/tmp/tmp.VKr3S3tjPH" 00:23:57.206 } 00:23:57.206 } 00:23:57.206 ] 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "subsystem": "iobuf", 00:23:57.206 "config": [ 00:23:57.206 { 00:23:57.206 "method": "iobuf_set_options", 00:23:57.206 "params": { 00:23:57.206 "small_pool_count": 8192, 00:23:57.206 "large_pool_count": 1024, 00:23:57.206 "small_bufsize": 8192, 00:23:57.206 "large_bufsize": 135168, 00:23:57.206 "enable_numa": false 00:23:57.206 } 00:23:57.206 } 00:23:57.206 ] 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "subsystem": "sock", 00:23:57.206 "config": [ 00:23:57.206 { 00:23:57.206 "method": "sock_set_default_impl", 00:23:57.206 "params": { 00:23:57.206 "impl_name": "posix" 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "sock_impl_set_options", 00:23:57.206 "params": { 00:23:57.206 "impl_name": "ssl", 00:23:57.206 "recv_buf_size": 4096, 00:23:57.206 "send_buf_size": 4096, 00:23:57.206 "enable_recv_pipe": true, 00:23:57.206 "enable_quickack": false, 00:23:57.206 "enable_placement_id": 0, 00:23:57.206 "enable_zerocopy_send_server": true, 00:23:57.206 "enable_zerocopy_send_client": false, 00:23:57.206 "zerocopy_threshold": 0, 00:23:57.206 "tls_version": 0, 00:23:57.206 "enable_ktls": false 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "sock_impl_set_options", 00:23:57.206 "params": { 00:23:57.206 "impl_name": "posix", 00:23:57.206 "recv_buf_size": 2097152, 00:23:57.206 "send_buf_size": 2097152, 00:23:57.206 "enable_recv_pipe": true, 00:23:57.206 "enable_quickack": false, 00:23:57.206 "enable_placement_id": 0, 00:23:57.206 "enable_zerocopy_send_server": true, 00:23:57.206 "enable_zerocopy_send_client": false, 00:23:57.206 "zerocopy_threshold": 0, 00:23:57.206 "tls_version": 0, 00:23:57.206 "enable_ktls": false 00:23:57.206 } 00:23:57.206 } 00:23:57.206 ] 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "subsystem": "vmd", 00:23:57.206 "config": [] 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "subsystem": "accel", 00:23:57.206 "config": [ 00:23:57.206 { 00:23:57.206 "method": "accel_set_options", 00:23:57.206 "params": { 00:23:57.206 "small_cache_size": 128, 00:23:57.206 "large_cache_size": 16, 00:23:57.206 "task_count": 2048, 00:23:57.206 "sequence_count": 2048, 00:23:57.206 "buf_count": 2048 00:23:57.206 } 00:23:57.206 } 00:23:57.206 ] 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "subsystem": "bdev", 00:23:57.206 "config": [ 00:23:57.206 { 00:23:57.206 "method": "bdev_set_options", 00:23:57.206 "params": { 00:23:57.206 "bdev_io_pool_size": 65535, 00:23:57.206 "bdev_io_cache_size": 256, 00:23:57.206 "bdev_auto_examine": true, 00:23:57.206 "iobuf_small_cache_size": 128, 
00:23:57.206 "iobuf_large_cache_size": 16 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "bdev_raid_set_options", 00:23:57.206 "params": { 00:23:57.206 "process_window_size_kb": 1024, 00:23:57.206 "process_max_bandwidth_mb_sec": 0 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "bdev_iscsi_set_options", 00:23:57.206 "params": { 00:23:57.206 "timeout_sec": 30 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "bdev_nvme_set_options", 00:23:57.206 "params": { 00:23:57.206 "action_on_timeout": "none", 00:23:57.206 "timeout_us": 0, 00:23:57.206 "timeout_admin_us": 0, 00:23:57.206 "keep_alive_timeout_ms": 10000, 00:23:57.206 "arbitration_burst": 0, 00:23:57.206 "low_priority_weight": 0, 00:23:57.206 "medium_priority_weight": 0, 00:23:57.206 "high_priority_weight": 0, 00:23:57.206 "nvme_adminq_poll_period_us": 10000, 00:23:57.206 "nvme_ioq_poll_period_us": 0, 00:23:57.206 "io_queue_requests": 512, 00:23:57.206 "delay_cmd_submit": true, 00:23:57.206 "transport_retry_count": 4, 00:23:57.206 "bdev_retry_count": 3, 00:23:57.206 "transport_ack_timeout": 0, 00:23:57.206 "ctrlr_loss_timeout_sec": 0, 00:23:57.206 "reconnect_delay_sec": 0, 00:23:57.206 "fast_io_fail_timeout_sec": 0, 00:23:57.206 "disable_auto_failback": false, 00:23:57.206 "generate_uuids": false, 00:23:57.206 "transport_tos": 0, 00:23:57.206 "nvme_error_stat": false, 00:23:57.206 "rdma_srq_size": 0, 00:23:57.206 "io_path_stat": false, 00:23:57.206 "allow_accel_sequence": false, 00:23:57.206 "rdma_max_cq_size": 0, 00:23:57.206 "rdma_cm_event_timeout_ms": 0, 00:23:57.206 "dhchap_digests": [ 00:23:57.206 "sha256", 00:23:57.206 "sha384", 00:23:57.206 "sha512" 00:23:57.206 ], 00:23:57.206 "dhchap_dhgroups": [ 00:23:57.206 "null", 00:23:57.206 "ffdhe2048", 00:23:57.206 "ffdhe3072", 00:23:57.206 "ffdhe4096", 00:23:57.206 "ffdhe6144", 00:23:57.206 "ffdhe8192" 00:23:57.206 ] 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "bdev_nvme_attach_controller", 00:23:57.206 "params": { 00:23:57.206 "name": "TLSTEST", 00:23:57.206 "trtype": "TCP", 00:23:57.206 "adrfam": "IPv4", 00:23:57.206 "traddr": "10.0.0.2", 00:23:57.206 "trsvcid": "4420", 00:23:57.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.206 "prchk_reftag": false, 00:23:57.206 "prchk_guard": false, 00:23:57.206 "ctrlr_loss_timeout_sec": 0, 00:23:57.206 "reconnect_delay_sec": 0, 00:23:57.206 "fast_io_fail_timeout_sec": 0, 00:23:57.206 "psk": "key0", 00:23:57.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.206 "hdgst": false, 00:23:57.206 "ddgst": false, 00:23:57.206 "multipath": "multipath" 00:23:57.206 } 00:23:57.206 }, 00:23:57.206 { 00:23:57.206 "method": "bdev_nvme_set_hotplug", 00:23:57.206 "params": { 00:23:57.206 "period_us": 100000, 00:23:57.206 "enable": false 00:23:57.207 } 00:23:57.207 }, 00:23:57.207 { 00:23:57.207 "method": "bdev_wait_for_examine" 00:23:57.207 } 00:23:57.207 ] 00:23:57.207 }, 00:23:57.207 { 00:23:57.207 "subsystem": "nbd", 00:23:57.207 "config": [] 00:23:57.207 } 00:23:57.207 ] 00:23:57.207 }' 00:23:57.207 [2024-11-06 10:16:00.588346] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:57.207 [2024-11-06 10:16:00.588397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916798 ] 00:23:57.207 [2024-11-06 10:16:00.652520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.207 [2024-11-06 10:16:00.681484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.468 [2024-11-06 10:16:00.815682] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.040 10:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:58.040 10:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:58.040 10:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.040 Running I/O for 10 seconds... 00:24:00.366 4999.00 IOPS, 19.53 MiB/s [2024-11-06T09:16:04.808Z] 5544.50 IOPS, 21.66 MiB/s [2024-11-06T09:16:05.750Z] 5633.00 IOPS, 22.00 MiB/s [2024-11-06T09:16:06.690Z] 5725.50 IOPS, 22.37 MiB/s [2024-11-06T09:16:07.633Z] 5641.20 IOPS, 22.04 MiB/s [2024-11-06T09:16:08.575Z] 5530.50 IOPS, 21.60 MiB/s [2024-11-06T09:16:09.957Z] 5407.71 IOPS, 21.12 MiB/s [2024-11-06T09:16:10.898Z] 5511.38 IOPS, 21.53 MiB/s [2024-11-06T09:16:11.839Z] 5433.22 IOPS, 21.22 MiB/s [2024-11-06T09:16:11.839Z] 5513.70 IOPS, 21.54 MiB/s 00:24:08.338 Latency(us) 00:24:08.338 [2024-11-06T09:16:11.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.338 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:08.338 Verification LBA range: start 0x0 length 0x2000 00:24:08.338 TLSTESTn1 : 10.01 5518.88 21.56 0.00 0.00 23161.30 4560.21 23374.51 00:24:08.338 [2024-11-06T09:16:11.839Z] =================================================================================================================== 00:24:08.338 [2024-11-06T09:16:11.839Z] Total : 5518.88 21.56 0.00 0.00 23161.30 4560.21 23374.51 00:24:08.338 { 00:24:08.338 "results": [ 00:24:08.338 { 00:24:08.338 "job": "TLSTESTn1", 00:24:08.338 "core_mask": "0x4", 00:24:08.338 "workload": "verify", 00:24:08.338 "status": "finished", 00:24:08.338 "verify_range": { 00:24:08.338 "start": 0, 00:24:08.338 "length": 8192 00:24:08.338 }, 00:24:08.338 "queue_depth": 128, 00:24:08.338 "io_size": 4096, 00:24:08.338 "runtime": 10.013632, 00:24:08.338 "iops": 5518.876667327099, 00:24:08.338 "mibps": 21.55811198174648, 00:24:08.338 "io_failed": 0, 00:24:08.338 "io_timeout": 0, 00:24:08.338 "avg_latency_us": 23161.3032850801, 00:24:08.338 "min_latency_us": 4560.213333333333, 00:24:08.338 "max_latency_us": 23374.506666666668 00:24:08.338 } 00:24:08.338 ], 00:24:08.338 "core_count": 1 00:24:08.338 } 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3916798 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3916798 ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3916798 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3916798 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3916798' 00:24:08.338 killing process with pid 3916798 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3916798 00:24:08.338 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.338 00:24:08.338 Latency(us) 00:24:08.338 [2024-11-06T09:16:11.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.338 [2024-11-06T09:16:11.839Z] =================================================================================================================== 00:24:08.338 [2024-11-06T09:16:11.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3916798 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3916459 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3916459 ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3916459 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3916459 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3916459' 00:24:08.338 killing process with pid 3916459 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3916459 00:24:08.338 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3916459 00:24:08.599 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:08.599 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.599 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3918853 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3918853 
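[editor's note] A quick sanity check of the 10-second TLSTESTn1 summary reported a little earlier: the MiB/s column is just IOPS scaled by the 4096-byte I/O size.

    # 5518.88 IOPS of 4 KiB I/Os -> MiB/s; matches the 21.56 in the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 5518.88 * 4096 / (1024 * 1024) }'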
00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3918853 ']' 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.600 10:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.600 [2024-11-06 10:16:11.982174] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:08.600 [2024-11-06 10:16:11.982225] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.600 [2024-11-06 10:16:12.068418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.861 [2024-11-06 10:16:12.102710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.861 [2024-11-06 10:16:12.102747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.861 [2024-11-06 10:16:12.102755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.861 [2024-11-06 10:16:12.102762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.861 [2024-11-06 10:16:12.102767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.861 [2024-11-06 10:16:12.103342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VKr3S3tjPH 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VKr3S3tjPH 00:24:09.434 10:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.694 [2024-11-06 10:16:12.979996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.694 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:09.694 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.955 [2024-11-06 10:16:13.324860] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.955 [2024-11-06 10:16:13.325069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.955 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.215 malloc0 00:24:10.215 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.215 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:24:10.477 10:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3919422 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3919422 /var/tmp/bdevperf.sock 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3919422 ']' 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.741 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.741 [2024-11-06 10:16:14.118622] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:10.741 [2024-11-06 10:16:14.118675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919422 ] 00:24:10.741 [2024-11-06 10:16:14.209279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.073 [2024-11-06 10:16:14.239359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.711 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:11.711 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:11.711 10:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:24:11.711 10:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:11.711 [2024-11-06 10:16:15.199119] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.972 nvme0n1 00:24:11.972 10:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.972 Running I/O for 1 seconds... 
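[editor's note] Condensed, the target-side wiring that setup_nvmf_tgt traced just above performs these RPCs, in this order (rpc.py path shortened, flags exactly as they appear in the log):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled ("TLS support is considered experimental")
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Register the PSK on the target side and allow host1 to connect with it
    rpc.py keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0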
00:24:12.913 5526.00 IOPS, 21.59 MiB/s 00:24:12.913 Latency(us) 00:24:12.913 [2024-11-06T09:16:16.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.913 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:12.913 Verification LBA range: start 0x0 length 0x2000 00:24:12.913 nvme0n1 : 1.03 5494.38 21.46 0.00 0.00 22955.86 7918.93 36263.25 00:24:12.913 [2024-11-06T09:16:16.414Z] =================================================================================================================== 00:24:12.913 [2024-11-06T09:16:16.414Z] Total : 5494.38 21.46 0.00 0.00 22955.86 7918.93 36263.25 00:24:12.913 { 00:24:12.913 "results": [ 00:24:12.913 { 00:24:12.913 "job": "nvme0n1", 00:24:12.913 "core_mask": "0x2", 00:24:12.913 "workload": "verify", 00:24:12.913 "status": "finished", 00:24:12.913 "verify_range": { 00:24:12.913 "start": 0, 00:24:12.913 "length": 8192 00:24:12.913 }, 00:24:12.913 "queue_depth": 128, 00:24:12.913 "io_size": 4096, 00:24:12.913 "runtime": 1.029051, 00:24:12.913 "iops": 5494.382688515924, 00:24:12.913 "mibps": 21.46243237701533, 00:24:12.913 "io_failed": 0, 00:24:12.913 "io_timeout": 0, 00:24:12.913 "avg_latency_us": 22955.855960382032, 00:24:12.913 "min_latency_us": 7918.933333333333, 00:24:12.913 "max_latency_us": 36263.253333333334 00:24:12.913 } 00:24:12.913 ], 00:24:12.913 "core_count": 1 00:24:12.913 } 00:24:12.913 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3919422 00:24:12.913 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3919422 ']' 00:24:12.913 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3919422 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3919422 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3919422' 00:24:13.173 killing process with pid 3919422 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3919422 00:24:13.173 Received shutdown signal, test time was about 1.000000 seconds 00:24:13.173 00:24:13.173 Latency(us) 00:24:13.173 [2024-11-06T09:16:16.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.173 [2024-11-06T09:16:16.674Z] =================================================================================================================== 00:24:13.173 [2024-11-06T09:16:16.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3919422 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3918853 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3918853 ']' 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3918853 00:24:13.173 10:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3918853 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3918853' 00:24:13.173 killing process with pid 3918853 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3918853 00:24:13.173 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3918853 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3919872 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3919872 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3919872 ']' 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.434 10:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.434 [2024-11-06 10:16:16.832888] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:13.434 [2024-11-06 10:16:16.832943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.434 [2024-11-06 10:16:16.918933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.695 [2024-11-06 10:16:16.953214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.695 [2024-11-06 10:16:16.953253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
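[editor's note] A simplified reading of the killprocess traces above, using the literal commands from the xtrace output with $pid standing in for the PIDs shown (3919422, 3918853):

    kill -0 "$pid"                    # fail fast if the pid is already gone
    ps --no-headers -o comm= "$pid"   # confirm it is an SPDK reactor, not e.g. sudo
    kill "$pid"                       # terminate the app
    wait "$pid"                       # reap it (the process was started by this shell)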
00:24:13.695 [2024-11-06 10:16:16.953262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.695 [2024-11-06 10:16:16.953269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.695 [2024-11-06 10:16:16.953275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.695 [2024-11-06 10:16:16.953822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.265 [2024-11-06 10:16:17.674038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.265 malloc0 00:24:14.265 [2024-11-06 10:16:17.700759] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.265 [2024-11-06 10:16:17.700986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3920222 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3920222 /var/tmp/bdevperf.sock 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3920222 ']' 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.265 10:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:14.524 [2024-11-06 10:16:17.779417] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
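[editor's note] The bdevperf command line traced just above, annotated; flag meanings are inferred from the matching fields in the perform_tests results JSON that follows later in this log:

    #   -m 2        core mask ("core_mask": "0x2"; reactor starts on core 1)
    #   -z          start idle and wait for an RPC to kick off the workload
    #   -r <sock>   RPC socket, later used by rpc.py and bdevperf.py
    #   -q 128      queue depth        ("queue_depth": 128 in the results)
    #   -o 4k       I/O size           ("io_size": 4096)
    #   -w verify   verify workload    ("workload": "verify")
    #   -t 1        run time in seconds ("runtime" ~ 1.02)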
00:24:14.525 [2024-11-06 10:16:17.779465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920222 ] 00:24:14.525 [2024-11-06 10:16:17.869777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.525 [2024-11-06 10:16:17.899717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.095 10:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:15.095 10:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:15.095 10:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VKr3S3tjPH 00:24:15.355 10:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:15.616 [2024-11-06 10:16:18.899578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.616 nvme0n1 00:24:15.616 10:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.616 Running I/O for 1 seconds... 00:24:16.999 4181.00 IOPS, 16.33 MiB/s 00:24:16.999 Latency(us) 00:24:16.999 [2024-11-06T09:16:20.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.999 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:16.999 Verification LBA range: start 0x0 length 0x2000 00:24:16.999 nvme0n1 : 1.02 4214.70 16.46 0.00 0.00 30103.17 5980.16 32112.64 00:24:16.999 [2024-11-06T09:16:20.500Z] =================================================================================================================== 00:24:16.999 [2024-11-06T09:16:20.500Z] Total : 4214.70 16.46 0.00 0.00 30103.17 5980.16 32112.64 00:24:16.999 { 00:24:16.999 "results": [ 00:24:16.999 { 00:24:16.999 "job": "nvme0n1", 00:24:16.999 "core_mask": "0x2", 00:24:16.999 "workload": "verify", 00:24:16.999 "status": "finished", 00:24:16.999 "verify_range": { 00:24:16.999 "start": 0, 00:24:16.999 "length": 8192 00:24:16.999 }, 00:24:16.999 "queue_depth": 128, 00:24:16.999 "io_size": 4096, 00:24:16.999 "runtime": 1.022611, 00:24:16.999 "iops": 4214.701386939902, 00:24:16.999 "mibps": 16.46367729273399, 00:24:16.999 "io_failed": 0, 00:24:16.999 "io_timeout": 0, 00:24:16.999 "avg_latency_us": 30103.167109048725, 00:24:16.999 "min_latency_us": 5980.16, 00:24:16.999 "max_latency_us": 32112.64 00:24:16.999 } 00:24:16.999 ], 00:24:16.999 "core_count": 1 00:24:16.999 } 00:24:17.000 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:17.000 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.000 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.000 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.000 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@267 -- # tgtcfg='{ 00:24:17.000 "subsystems": [ 00:24:17.000 { 00:24:17.000 "subsystem": "keyring", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "keyring_file_add_key", 00:24:17.000 "params": { 00:24:17.000 "name": "key0", 00:24:17.000 "path": "/tmp/tmp.VKr3S3tjPH" 00:24:17.000 } 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "iobuf", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "iobuf_set_options", 00:24:17.000 "params": { 00:24:17.000 "small_pool_count": 8192, 00:24:17.000 "large_pool_count": 1024, 00:24:17.000 "small_bufsize": 8192, 00:24:17.000 "large_bufsize": 135168, 00:24:17.000 "enable_numa": false 00:24:17.000 } 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "sock", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "sock_set_default_impl", 00:24:17.000 "params": { 00:24:17.000 "impl_name": "posix" 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "sock_impl_set_options", 00:24:17.000 "params": { 00:24:17.000 "impl_name": "ssl", 00:24:17.000 "recv_buf_size": 4096, 00:24:17.000 "send_buf_size": 4096, 00:24:17.000 "enable_recv_pipe": true, 00:24:17.000 "enable_quickack": false, 00:24:17.000 "enable_placement_id": 0, 00:24:17.000 "enable_zerocopy_send_server": true, 00:24:17.000 "enable_zerocopy_send_client": false, 00:24:17.000 "zerocopy_threshold": 0, 00:24:17.000 "tls_version": 0, 00:24:17.000 "enable_ktls": false 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "sock_impl_set_options", 00:24:17.000 "params": { 00:24:17.000 "impl_name": "posix", 00:24:17.000 "recv_buf_size": 2097152, 00:24:17.000 "send_buf_size": 2097152, 00:24:17.000 "enable_recv_pipe": true, 00:24:17.000 "enable_quickack": false, 00:24:17.000 "enable_placement_id": 0, 00:24:17.000 "enable_zerocopy_send_server": true, 00:24:17.000 "enable_zerocopy_send_client": false, 00:24:17.000 "zerocopy_threshold": 0, 00:24:17.000 "tls_version": 0, 00:24:17.000 "enable_ktls": false 00:24:17.000 } 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "vmd", 00:24:17.000 "config": [] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "accel", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "accel_set_options", 00:24:17.000 "params": { 00:24:17.000 "small_cache_size": 128, 00:24:17.000 "large_cache_size": 16, 00:24:17.000 "task_count": 2048, 00:24:17.000 "sequence_count": 2048, 00:24:17.000 "buf_count": 2048 00:24:17.000 } 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "bdev", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "bdev_set_options", 00:24:17.000 "params": { 00:24:17.000 "bdev_io_pool_size": 65535, 00:24:17.000 "bdev_io_cache_size": 256, 00:24:17.000 "bdev_auto_examine": true, 00:24:17.000 "iobuf_small_cache_size": 128, 00:24:17.000 "iobuf_large_cache_size": 16 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_raid_set_options", 00:24:17.000 "params": { 00:24:17.000 "process_window_size_kb": 1024, 00:24:17.000 "process_max_bandwidth_mb_sec": 0 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_iscsi_set_options", 00:24:17.000 "params": { 00:24:17.000 "timeout_sec": 30 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_nvme_set_options", 00:24:17.000 "params": { 00:24:17.000 "action_on_timeout": "none", 00:24:17.000 "timeout_us": 0, 00:24:17.000 
"timeout_admin_us": 0, 00:24:17.000 "keep_alive_timeout_ms": 10000, 00:24:17.000 "arbitration_burst": 0, 00:24:17.000 "low_priority_weight": 0, 00:24:17.000 "medium_priority_weight": 0, 00:24:17.000 "high_priority_weight": 0, 00:24:17.000 "nvme_adminq_poll_period_us": 10000, 00:24:17.000 "nvme_ioq_poll_period_us": 0, 00:24:17.000 "io_queue_requests": 0, 00:24:17.000 "delay_cmd_submit": true, 00:24:17.000 "transport_retry_count": 4, 00:24:17.000 "bdev_retry_count": 3, 00:24:17.000 "transport_ack_timeout": 0, 00:24:17.000 "ctrlr_loss_timeout_sec": 0, 00:24:17.000 "reconnect_delay_sec": 0, 00:24:17.000 "fast_io_fail_timeout_sec": 0, 00:24:17.000 "disable_auto_failback": false, 00:24:17.000 "generate_uuids": false, 00:24:17.000 "transport_tos": 0, 00:24:17.000 "nvme_error_stat": false, 00:24:17.000 "rdma_srq_size": 0, 00:24:17.000 "io_path_stat": false, 00:24:17.000 "allow_accel_sequence": false, 00:24:17.000 "rdma_max_cq_size": 0, 00:24:17.000 "rdma_cm_event_timeout_ms": 0, 00:24:17.000 "dhchap_digests": [ 00:24:17.000 "sha256", 00:24:17.000 "sha384", 00:24:17.000 "sha512" 00:24:17.000 ], 00:24:17.000 "dhchap_dhgroups": [ 00:24:17.000 "null", 00:24:17.000 "ffdhe2048", 00:24:17.000 "ffdhe3072", 00:24:17.000 "ffdhe4096", 00:24:17.000 "ffdhe6144", 00:24:17.000 "ffdhe8192" 00:24:17.000 ] 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_nvme_set_hotplug", 00:24:17.000 "params": { 00:24:17.000 "period_us": 100000, 00:24:17.000 "enable": false 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_malloc_create", 00:24:17.000 "params": { 00:24:17.000 "name": "malloc0", 00:24:17.000 "num_blocks": 8192, 00:24:17.000 "block_size": 4096, 00:24:17.000 "physical_block_size": 4096, 00:24:17.000 "uuid": "01cb357d-d913-4943-ba01-ec196b28e542", 00:24:17.000 "optimal_io_boundary": 0, 00:24:17.000 "md_size": 0, 00:24:17.000 "dif_type": 0, 00:24:17.000 "dif_is_head_of_md": false, 00:24:17.000 "dif_pi_format": 0 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "bdev_wait_for_examine" 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "nbd", 00:24:17.000 "config": [] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "scheduler", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "framework_set_scheduler", 00:24:17.000 "params": { 00:24:17.000 "name": "static" 00:24:17.000 } 00:24:17.000 } 00:24:17.000 ] 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "subsystem": "nvmf", 00:24:17.000 "config": [ 00:24:17.000 { 00:24:17.000 "method": "nvmf_set_config", 00:24:17.000 "params": { 00:24:17.000 "discovery_filter": "match_any", 00:24:17.000 "admin_cmd_passthru": { 00:24:17.000 "identify_ctrlr": false 00:24:17.000 }, 00:24:17.000 "dhchap_digests": [ 00:24:17.000 "sha256", 00:24:17.000 "sha384", 00:24:17.000 "sha512" 00:24:17.000 ], 00:24:17.000 "dhchap_dhgroups": [ 00:24:17.000 "null", 00:24:17.000 "ffdhe2048", 00:24:17.000 "ffdhe3072", 00:24:17.000 "ffdhe4096", 00:24:17.000 "ffdhe6144", 00:24:17.000 "ffdhe8192" 00:24:17.000 ] 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "nvmf_set_max_subsystems", 00:24:17.000 "params": { 00:24:17.000 "max_subsystems": 1024 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "nvmf_set_crdt", 00:24:17.000 "params": { 00:24:17.000 "crdt1": 0, 00:24:17.000 "crdt2": 0, 00:24:17.000 "crdt3": 0 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "nvmf_create_transport", 00:24:17.000 "params": { 00:24:17.000 "trtype": 
"TCP", 00:24:17.000 "max_queue_depth": 128, 00:24:17.000 "max_io_qpairs_per_ctrlr": 127, 00:24:17.000 "in_capsule_data_size": 4096, 00:24:17.000 "max_io_size": 131072, 00:24:17.000 "io_unit_size": 131072, 00:24:17.000 "max_aq_depth": 128, 00:24:17.000 "num_shared_buffers": 511, 00:24:17.000 "buf_cache_size": 4294967295, 00:24:17.000 "dif_insert_or_strip": false, 00:24:17.000 "zcopy": false, 00:24:17.000 "c2h_success": false, 00:24:17.000 "sock_priority": 0, 00:24:17.000 "abort_timeout_sec": 1, 00:24:17.000 "ack_timeout": 0, 00:24:17.000 "data_wr_pool_size": 0 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "nvmf_create_subsystem", 00:24:17.000 "params": { 00:24:17.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.000 "allow_any_host": false, 00:24:17.000 "serial_number": "00000000000000000000", 00:24:17.000 "model_number": "SPDK bdev Controller", 00:24:17.000 "max_namespaces": 32, 00:24:17.000 "min_cntlid": 1, 00:24:17.000 "max_cntlid": 65519, 00:24:17.000 "ana_reporting": false 00:24:17.000 } 00:24:17.000 }, 00:24:17.000 { 00:24:17.000 "method": "nvmf_subsystem_add_host", 00:24:17.000 "params": { 00:24:17.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.000 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.000 "psk": "key0" 00:24:17.000 } 00:24:17.000 }, 00:24:17.001 { 00:24:17.001 "method": "nvmf_subsystem_add_ns", 00:24:17.001 "params": { 00:24:17.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.001 "namespace": { 00:24:17.001 "nsid": 1, 00:24:17.001 "bdev_name": "malloc0", 00:24:17.001 "nguid": "01CB357DD9134943BA01EC196B28E542", 00:24:17.001 "uuid": "01cb357d-d913-4943-ba01-ec196b28e542", 00:24:17.001 "no_auto_visible": false 00:24:17.001 } 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "nvmf_subsystem_add_listener", 00:24:17.001 "params": { 00:24:17.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.001 "listen_address": { 00:24:17.001 "trtype": "TCP", 00:24:17.001 "adrfam": "IPv4", 00:24:17.001 "traddr": "10.0.0.2", 00:24:17.001 "trsvcid": "4420" 00:24:17.001 }, 00:24:17.001 "secure_channel": false, 00:24:17.001 "sock_impl": "ssl" 00:24:17.001 } 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }' 00:24:17.001 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:17.001 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:17.001 "subsystems": [ 00:24:17.001 { 00:24:17.001 "subsystem": "keyring", 00:24:17.001 "config": [ 00:24:17.001 { 00:24:17.001 "method": "keyring_file_add_key", 00:24:17.001 "params": { 00:24:17.001 "name": "key0", 00:24:17.001 "path": "/tmp/tmp.VKr3S3tjPH" 00:24:17.001 } 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "iobuf", 00:24:17.001 "config": [ 00:24:17.001 { 00:24:17.001 "method": "iobuf_set_options", 00:24:17.001 "params": { 00:24:17.001 "small_pool_count": 8192, 00:24:17.001 "large_pool_count": 1024, 00:24:17.001 "small_bufsize": 8192, 00:24:17.001 "large_bufsize": 135168, 00:24:17.001 "enable_numa": false 00:24:17.001 } 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "sock", 00:24:17.001 "config": [ 00:24:17.001 { 00:24:17.001 "method": "sock_set_default_impl", 00:24:17.001 "params": { 00:24:17.001 "impl_name": "posix" 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "sock_impl_set_options", 00:24:17.001 "params": { 00:24:17.001 
"impl_name": "ssl", 00:24:17.001 "recv_buf_size": 4096, 00:24:17.001 "send_buf_size": 4096, 00:24:17.001 "enable_recv_pipe": true, 00:24:17.001 "enable_quickack": false, 00:24:17.001 "enable_placement_id": 0, 00:24:17.001 "enable_zerocopy_send_server": true, 00:24:17.001 "enable_zerocopy_send_client": false, 00:24:17.001 "zerocopy_threshold": 0, 00:24:17.001 "tls_version": 0, 00:24:17.001 "enable_ktls": false 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "sock_impl_set_options", 00:24:17.001 "params": { 00:24:17.001 "impl_name": "posix", 00:24:17.001 "recv_buf_size": 2097152, 00:24:17.001 "send_buf_size": 2097152, 00:24:17.001 "enable_recv_pipe": true, 00:24:17.001 "enable_quickack": false, 00:24:17.001 "enable_placement_id": 0, 00:24:17.001 "enable_zerocopy_send_server": true, 00:24:17.001 "enable_zerocopy_send_client": false, 00:24:17.001 "zerocopy_threshold": 0, 00:24:17.001 "tls_version": 0, 00:24:17.001 "enable_ktls": false 00:24:17.001 } 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "vmd", 00:24:17.001 "config": [] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "accel", 00:24:17.001 "config": [ 00:24:17.001 { 00:24:17.001 "method": "accel_set_options", 00:24:17.001 "params": { 00:24:17.001 "small_cache_size": 128, 00:24:17.001 "large_cache_size": 16, 00:24:17.001 "task_count": 2048, 00:24:17.001 "sequence_count": 2048, 00:24:17.001 "buf_count": 2048 00:24:17.001 } 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "bdev", 00:24:17.001 "config": [ 00:24:17.001 { 00:24:17.001 "method": "bdev_set_options", 00:24:17.001 "params": { 00:24:17.001 "bdev_io_pool_size": 65535, 00:24:17.001 "bdev_io_cache_size": 256, 00:24:17.001 "bdev_auto_examine": true, 00:24:17.001 "iobuf_small_cache_size": 128, 00:24:17.001 "iobuf_large_cache_size": 16 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_raid_set_options", 00:24:17.001 "params": { 00:24:17.001 "process_window_size_kb": 1024, 00:24:17.001 "process_max_bandwidth_mb_sec": 0 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_iscsi_set_options", 00:24:17.001 "params": { 00:24:17.001 "timeout_sec": 30 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_nvme_set_options", 00:24:17.001 "params": { 00:24:17.001 "action_on_timeout": "none", 00:24:17.001 "timeout_us": 0, 00:24:17.001 "timeout_admin_us": 0, 00:24:17.001 "keep_alive_timeout_ms": 10000, 00:24:17.001 "arbitration_burst": 0, 00:24:17.001 "low_priority_weight": 0, 00:24:17.001 "medium_priority_weight": 0, 00:24:17.001 "high_priority_weight": 0, 00:24:17.001 "nvme_adminq_poll_period_us": 10000, 00:24:17.001 "nvme_ioq_poll_period_us": 0, 00:24:17.001 "io_queue_requests": 512, 00:24:17.001 "delay_cmd_submit": true, 00:24:17.001 "transport_retry_count": 4, 00:24:17.001 "bdev_retry_count": 3, 00:24:17.001 "transport_ack_timeout": 0, 00:24:17.001 "ctrlr_loss_timeout_sec": 0, 00:24:17.001 "reconnect_delay_sec": 0, 00:24:17.001 "fast_io_fail_timeout_sec": 0, 00:24:17.001 "disable_auto_failback": false, 00:24:17.001 "generate_uuids": false, 00:24:17.001 "transport_tos": 0, 00:24:17.001 "nvme_error_stat": false, 00:24:17.001 "rdma_srq_size": 0, 00:24:17.001 "io_path_stat": false, 00:24:17.001 "allow_accel_sequence": false, 00:24:17.001 "rdma_max_cq_size": 0, 00:24:17.001 "rdma_cm_event_timeout_ms": 0, 00:24:17.001 "dhchap_digests": [ 00:24:17.001 "sha256", 00:24:17.001 "sha384", 00:24:17.001 "sha512" 00:24:17.001 ], 
00:24:17.001 "dhchap_dhgroups": [ 00:24:17.001 "null", 00:24:17.001 "ffdhe2048", 00:24:17.001 "ffdhe3072", 00:24:17.001 "ffdhe4096", 00:24:17.001 "ffdhe6144", 00:24:17.001 "ffdhe8192" 00:24:17.001 ] 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_nvme_attach_controller", 00:24:17.001 "params": { 00:24:17.001 "name": "nvme0", 00:24:17.001 "trtype": "TCP", 00:24:17.001 "adrfam": "IPv4", 00:24:17.001 "traddr": "10.0.0.2", 00:24:17.001 "trsvcid": "4420", 00:24:17.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.001 "prchk_reftag": false, 00:24:17.001 "prchk_guard": false, 00:24:17.001 "ctrlr_loss_timeout_sec": 0, 00:24:17.001 "reconnect_delay_sec": 0, 00:24:17.001 "fast_io_fail_timeout_sec": 0, 00:24:17.001 "psk": "key0", 00:24:17.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.001 "hdgst": false, 00:24:17.001 "ddgst": false, 00:24:17.001 "multipath": "multipath" 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_nvme_set_hotplug", 00:24:17.001 "params": { 00:24:17.001 "period_us": 100000, 00:24:17.001 "enable": false 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_enable_histogram", 00:24:17.001 "params": { 00:24:17.001 "name": "nvme0n1", 00:24:17.001 "enable": true 00:24:17.001 } 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "method": "bdev_wait_for_examine" 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }, 00:24:17.001 { 00:24:17.001 "subsystem": "nbd", 00:24:17.001 "config": [] 00:24:17.001 } 00:24:17.001 ] 00:24:17.001 }' 00:24:17.001 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3920222 00:24:17.001 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3920222 ']' 00:24:17.002 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3920222 00:24:17.002 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:17.002 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.002 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3920222 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3920222' 00:24:17.263 killing process with pid 3920222 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3920222 00:24:17.263 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.263 00:24:17.263 Latency(us) 00:24:17.263 [2024-11-06T09:16:20.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.263 [2024-11-06T09:16:20.764Z] =================================================================================================================== 00:24:17.263 [2024-11-06T09:16:20.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3920222 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3919872 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3919872 ']' 00:24:17.263 10:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3919872 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3919872 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3919872' 00:24:17.263 killing process with pid 3919872 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3919872 00:24:17.263 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3919872 00:24:17.525 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:17.525 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.525 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.525 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.525 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:17.525 "subsystems": [ 00:24:17.525 { 00:24:17.525 "subsystem": "keyring", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "keyring_file_add_key", 00:24:17.525 "params": { 00:24:17.525 "name": "key0", 00:24:17.525 "path": "/tmp/tmp.VKr3S3tjPH" 00:24:17.525 } 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "iobuf", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "iobuf_set_options", 00:24:17.525 "params": { 00:24:17.525 "small_pool_count": 8192, 00:24:17.525 "large_pool_count": 1024, 00:24:17.525 "small_bufsize": 8192, 00:24:17.525 "large_bufsize": 135168, 00:24:17.525 "enable_numa": false 00:24:17.525 } 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "sock", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "sock_set_default_impl", 00:24:17.525 "params": { 00:24:17.525 "impl_name": "posix" 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "sock_impl_set_options", 00:24:17.525 "params": { 00:24:17.525 "impl_name": "ssl", 00:24:17.525 "recv_buf_size": 4096, 00:24:17.525 "send_buf_size": 4096, 00:24:17.525 "enable_recv_pipe": true, 00:24:17.525 "enable_quickack": false, 00:24:17.525 "enable_placement_id": 0, 00:24:17.525 "enable_zerocopy_send_server": true, 00:24:17.525 "enable_zerocopy_send_client": false, 00:24:17.525 "zerocopy_threshold": 0, 00:24:17.525 "tls_version": 0, 00:24:17.525 "enable_ktls": false 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "sock_impl_set_options", 00:24:17.525 "params": { 00:24:17.525 "impl_name": "posix", 00:24:17.525 "recv_buf_size": 2097152, 00:24:17.525 "send_buf_size": 2097152, 00:24:17.525 "enable_recv_pipe": true, 00:24:17.525 "enable_quickack": false, 00:24:17.525 "enable_placement_id": 0, 00:24:17.525 "enable_zerocopy_send_server": true, 00:24:17.525 "enable_zerocopy_send_client": false, 00:24:17.525 
"zerocopy_threshold": 0, 00:24:17.525 "tls_version": 0, 00:24:17.525 "enable_ktls": false 00:24:17.525 } 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "vmd", 00:24:17.525 "config": [] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "accel", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "accel_set_options", 00:24:17.525 "params": { 00:24:17.525 "small_cache_size": 128, 00:24:17.525 "large_cache_size": 16, 00:24:17.525 "task_count": 2048, 00:24:17.525 "sequence_count": 2048, 00:24:17.525 "buf_count": 2048 00:24:17.525 } 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "bdev", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "bdev_set_options", 00:24:17.525 "params": { 00:24:17.525 "bdev_io_pool_size": 65535, 00:24:17.525 "bdev_io_cache_size": 256, 00:24:17.525 "bdev_auto_examine": true, 00:24:17.525 "iobuf_small_cache_size": 128, 00:24:17.525 "iobuf_large_cache_size": 16 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_raid_set_options", 00:24:17.525 "params": { 00:24:17.525 "process_window_size_kb": 1024, 00:24:17.525 "process_max_bandwidth_mb_sec": 0 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_iscsi_set_options", 00:24:17.525 "params": { 00:24:17.525 "timeout_sec": 30 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_nvme_set_options", 00:24:17.525 "params": { 00:24:17.525 "action_on_timeout": "none", 00:24:17.525 "timeout_us": 0, 00:24:17.525 "timeout_admin_us": 0, 00:24:17.525 "keep_alive_timeout_ms": 10000, 00:24:17.525 "arbitration_burst": 0, 00:24:17.525 "low_priority_weight": 0, 00:24:17.525 "medium_priority_weight": 0, 00:24:17.525 "high_priority_weight": 0, 00:24:17.525 "nvme_adminq_poll_period_us": 10000, 00:24:17.525 "nvme_ioq_poll_period_us": 0, 00:24:17.525 "io_queue_requests": 0, 00:24:17.525 "delay_cmd_submit": true, 00:24:17.525 "transport_retry_count": 4, 00:24:17.525 "bdev_retry_count": 3, 00:24:17.525 "transport_ack_timeout": 0, 00:24:17.525 "ctrlr_loss_timeout_sec": 0, 00:24:17.525 "reconnect_delay_sec": 0, 00:24:17.525 "fast_io_fail_timeout_sec": 0, 00:24:17.525 "disable_auto_failback": false, 00:24:17.525 "generate_uuids": false, 00:24:17.525 "transport_tos": 0, 00:24:17.525 "nvme_error_stat": false, 00:24:17.525 "rdma_srq_size": 0, 00:24:17.525 "io_path_stat": false, 00:24:17.525 "allow_accel_sequence": false, 00:24:17.525 "rdma_max_cq_size": 0, 00:24:17.525 "rdma_cm_event_timeout_ms": 0, 00:24:17.525 "dhchap_digests": [ 00:24:17.525 "sha256", 00:24:17.525 "sha384", 00:24:17.525 "sha512" 00:24:17.525 ], 00:24:17.525 "dhchap_dhgroups": [ 00:24:17.525 "null", 00:24:17.525 "ffdhe2048", 00:24:17.525 "ffdhe3072", 00:24:17.525 "ffdhe4096", 00:24:17.525 "ffdhe6144", 00:24:17.525 "ffdhe8192" 00:24:17.525 ] 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_nvme_set_hotplug", 00:24:17.525 "params": { 00:24:17.525 "period_us": 100000, 00:24:17.525 "enable": false 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_malloc_create", 00:24:17.525 "params": { 00:24:17.525 "name": "malloc0", 00:24:17.525 "num_blocks": 8192, 00:24:17.525 "block_size": 4096, 00:24:17.525 "physical_block_size": 4096, 00:24:17.525 "uuid": "01cb357d-d913-4943-ba01-ec196b28e542", 00:24:17.525 "optimal_io_boundary": 0, 00:24:17.525 "md_size": 0, 00:24:17.525 "dif_type": 0, 00:24:17.525 "dif_is_head_of_md": false, 00:24:17.525 "dif_pi_format": 0 00:24:17.525 } 
00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "bdev_wait_for_examine" 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "nbd", 00:24:17.525 "config": [] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "scheduler", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "framework_set_scheduler", 00:24:17.525 "params": { 00:24:17.525 "name": "static" 00:24:17.525 } 00:24:17.525 } 00:24:17.525 ] 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "subsystem": "nvmf", 00:24:17.525 "config": [ 00:24:17.525 { 00:24:17.525 "method": "nvmf_set_config", 00:24:17.525 "params": { 00:24:17.525 "discovery_filter": "match_any", 00:24:17.525 "admin_cmd_passthru": { 00:24:17.525 "identify_ctrlr": false 00:24:17.525 }, 00:24:17.525 "dhchap_digests": [ 00:24:17.525 "sha256", 00:24:17.525 "sha384", 00:24:17.525 "sha512" 00:24:17.525 ], 00:24:17.525 "dhchap_dhgroups": [ 00:24:17.525 "null", 00:24:17.525 "ffdhe2048", 00:24:17.525 "ffdhe3072", 00:24:17.525 "ffdhe4096", 00:24:17.525 "ffdhe6144", 00:24:17.525 "ffdhe8192" 00:24:17.525 ] 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "nvmf_set_max_subsystems", 00:24:17.525 "params": { 00:24:17.525 "max_subsystems": 1024 00:24:17.525 } 00:24:17.525 }, 00:24:17.525 { 00:24:17.525 "method": "nvmf_set_crdt", 00:24:17.525 "params": { 00:24:17.525 "crdt1": 0, 00:24:17.526 "crdt2": 0, 00:24:17.526 "crdt3": 0 00:24:17.526 } 00:24:17.526 }, 00:24:17.526 { 00:24:17.526 "method": "nvmf_create_transport", 00:24:17.526 "params": { 00:24:17.526 "trtype": "TCP", 00:24:17.526 "max_queue_depth": 128, 00:24:17.526 "max_io_qpairs_per_ctrlr": 127, 00:24:17.526 "in_capsule_data_size": 4096, 00:24:17.526 "max_io_size": 131072, 00:24:17.526 "io_unit_size": 131072, 00:24:17.526 "max_aq_depth": 128, 00:24:17.526 "num_shared_buffers": 511, 00:24:17.526 "buf_cache_size": 4294967295, 00:24:17.526 "dif_insert_or_strip": false, 00:24:17.526 "zcopy": false, 00:24:17.526 "c2h_success": false, 00:24:17.526 "sock_priority": 0, 00:24:17.526 "abort_timeout_sec": 1, 00:24:17.526 "ack_timeout": 0, 00:24:17.526 "data_wr_pool_size": 0 00:24:17.526 } 00:24:17.526 }, 00:24:17.526 { 00:24:17.526 "method": "nvmf_create_subsystem", 00:24:17.526 "params": { 00:24:17.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.526 "allow_any_host": false, 00:24:17.526 "serial_number": "00000000000000000000", 00:24:17.526 "model_number": "SPDK bdev Controller", 00:24:17.526 "max_namespaces": 32, 00:24:17.526 "min_cntlid": 1, 00:24:17.526 "max_cntlid": 65519, 00:24:17.526 "ana_reporting": false 00:24:17.526 } 00:24:17.526 }, 00:24:17.526 { 00:24:17.526 "method": "nvmf_subsystem_add_host", 00:24:17.526 "params": { 00:24:17.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.526 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.526 "psk": "key0" 00:24:17.526 } 00:24:17.526 }, 00:24:17.526 { 00:24:17.526 "method": "nvmf_subsystem_add_ns", 00:24:17.526 "params": { 00:24:17.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.526 "namespace": { 00:24:17.526 "nsid": 1, 00:24:17.526 "bdev_name": "malloc0", 00:24:17.526 "nguid": "01CB357DD9134943BA01EC196B28E542", 00:24:17.526 "uuid": "01cb357d-d913-4943-ba01-ec196b28e542", 00:24:17.526 "no_auto_visible": false 00:24:17.526 } 00:24:17.526 } 00:24:17.526 }, 00:24:17.526 { 00:24:17.526 "method": "nvmf_subsystem_add_listener", 00:24:17.526 "params": { 00:24:17.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.526 "listen_address": { 00:24:17.526 "trtype": "TCP", 00:24:17.526 "adrfam": "IPv4", 00:24:17.526 "traddr": 
"10.0.0.2", 00:24:17.526 "trsvcid": "4420" 00:24:17.526 }, 00:24:17.526 "secure_channel": false, 00:24:17.526 "sock_impl": "ssl" 00:24:17.526 } 00:24:17.526 } 00:24:17.526 ] 00:24:17.526 } 00:24:17.526 ] 00:24:17.526 }' 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3920768 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3920768 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3920768 ']' 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.526 10:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.526 [2024-11-06 10:16:20.913467] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:17.526 [2024-11-06 10:16:20.913525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.526 [2024-11-06 10:16:20.997913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.786 [2024-11-06 10:16:21.033791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.786 [2024-11-06 10:16:21.033827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.786 [2024-11-06 10:16:21.033836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.786 [2024-11-06 10:16:21.033844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.786 [2024-11-06 10:16:21.033851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.786 [2024-11-06 10:16:21.034451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.786 [2024-11-06 10:16:21.233099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.786 [2024-11-06 10:16:21.265111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.786 [2024-11-06 10:16:21.265330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3920936 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3920936 /var/tmp/bdevperf.sock 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3920936 ']' 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
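The JSON configuration echoed into nvmf_tgt above is ordinary SPDK save_config output, so the same TLS-enabled target state can be rebuilt outside the harness. A minimal sketch, assuming this SPDK build, the default RPC socket, and the PSK file still present at /tmp/tmp.VKr3S3tjPH; the file name tgt_config.json is illustrative and harness details such as the cvl_0_0_ns_spdk network namespace are ignored:

# Capture the running target's configuration and replay it into a fresh nvmf_tgt (sketch only)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config > tgt_config.json
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -c tgt_config.json &
# The keyring entry (key0 -> /tmp/tmp.VKr3S3tjPH), the ssl sock impl options and the
# subsystem/host/listener definitions seen in tgtcfg above are all restored by the config
# file, so no further RPCs are needed once the target is up.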
00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.357 10:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:18.357 "subsystems": [ 00:24:18.357 { 00:24:18.357 "subsystem": "keyring", 00:24:18.357 "config": [ 00:24:18.357 { 00:24:18.357 "method": "keyring_file_add_key", 00:24:18.357 "params": { 00:24:18.357 "name": "key0", 00:24:18.357 "path": "/tmp/tmp.VKr3S3tjPH" 00:24:18.357 } 00:24:18.357 } 00:24:18.357 ] 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "subsystem": "iobuf", 00:24:18.357 "config": [ 00:24:18.357 { 00:24:18.357 "method": "iobuf_set_options", 00:24:18.357 "params": { 00:24:18.357 "small_pool_count": 8192, 00:24:18.357 "large_pool_count": 1024, 00:24:18.357 "small_bufsize": 8192, 00:24:18.357 "large_bufsize": 135168, 00:24:18.357 "enable_numa": false 00:24:18.357 } 00:24:18.357 } 00:24:18.357 ] 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "subsystem": "sock", 00:24:18.357 "config": [ 00:24:18.357 { 00:24:18.357 "method": "sock_set_default_impl", 00:24:18.357 "params": { 00:24:18.357 "impl_name": "posix" 00:24:18.357 } 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "method": "sock_impl_set_options", 00:24:18.357 "params": { 00:24:18.357 "impl_name": "ssl", 00:24:18.357 "recv_buf_size": 4096, 00:24:18.357 "send_buf_size": 4096, 00:24:18.357 "enable_recv_pipe": true, 00:24:18.357 "enable_quickack": false, 00:24:18.357 "enable_placement_id": 0, 00:24:18.357 "enable_zerocopy_send_server": true, 00:24:18.357 "enable_zerocopy_send_client": false, 00:24:18.357 "zerocopy_threshold": 0, 00:24:18.357 "tls_version": 0, 00:24:18.357 "enable_ktls": false 00:24:18.357 } 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "method": "sock_impl_set_options", 00:24:18.357 "params": { 00:24:18.357 "impl_name": "posix", 00:24:18.357 "recv_buf_size": 2097152, 00:24:18.357 "send_buf_size": 2097152, 00:24:18.357 "enable_recv_pipe": true, 00:24:18.357 "enable_quickack": false, 00:24:18.357 "enable_placement_id": 0, 00:24:18.357 "enable_zerocopy_send_server": true, 00:24:18.357 "enable_zerocopy_send_client": false, 00:24:18.357 "zerocopy_threshold": 0, 00:24:18.357 "tls_version": 0, 00:24:18.357 "enable_ktls": false 00:24:18.357 } 00:24:18.357 } 00:24:18.357 ] 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "subsystem": "vmd", 00:24:18.357 "config": [] 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "subsystem": "accel", 00:24:18.357 "config": [ 00:24:18.357 { 00:24:18.357 "method": "accel_set_options", 00:24:18.357 "params": { 00:24:18.357 "small_cache_size": 128, 00:24:18.357 "large_cache_size": 16, 00:24:18.357 "task_count": 2048, 00:24:18.357 "sequence_count": 2048, 00:24:18.357 "buf_count": 2048 00:24:18.357 } 00:24:18.357 } 00:24:18.357 ] 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "subsystem": "bdev", 00:24:18.357 "config": [ 00:24:18.357 { 00:24:18.357 "method": "bdev_set_options", 00:24:18.357 "params": { 00:24:18.357 "bdev_io_pool_size": 65535, 00:24:18.357 "bdev_io_cache_size": 256, 00:24:18.357 "bdev_auto_examine": true, 00:24:18.357 "iobuf_small_cache_size": 128, 00:24:18.357 "iobuf_large_cache_size": 16 00:24:18.357 } 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "method": 
"bdev_raid_set_options", 00:24:18.357 "params": { 00:24:18.357 "process_window_size_kb": 1024, 00:24:18.357 "process_max_bandwidth_mb_sec": 0 00:24:18.357 } 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "method": "bdev_iscsi_set_options", 00:24:18.357 "params": { 00:24:18.357 "timeout_sec": 30 00:24:18.357 } 00:24:18.357 }, 00:24:18.357 { 00:24:18.357 "method": "bdev_nvme_set_options", 00:24:18.357 "params": { 00:24:18.357 "action_on_timeout": "none", 00:24:18.357 "timeout_us": 0, 00:24:18.357 "timeout_admin_us": 0, 00:24:18.357 "keep_alive_timeout_ms": 10000, 00:24:18.357 "arbitration_burst": 0, 00:24:18.357 "low_priority_weight": 0, 00:24:18.357 "medium_priority_weight": 0, 00:24:18.357 "high_priority_weight": 0, 00:24:18.357 "nvme_adminq_poll_period_us": 10000, 00:24:18.357 "nvme_ioq_poll_period_us": 0, 00:24:18.357 "io_queue_requests": 512, 00:24:18.357 "delay_cmd_submit": true, 00:24:18.357 "transport_retry_count": 4, 00:24:18.357 "bdev_retry_count": 3, 00:24:18.357 "transport_ack_timeout": 0, 00:24:18.357 "ctrlr_loss_timeout_sec": 0, 00:24:18.357 "reconnect_delay_sec": 0, 00:24:18.357 "fast_io_fail_timeout_sec": 0, 00:24:18.358 "disable_auto_failback": false, 00:24:18.358 "generate_uuids": false, 00:24:18.358 "transport_tos": 0, 00:24:18.358 "nvme_error_stat": false, 00:24:18.358 "rdma_srq_size": 0, 00:24:18.358 "io_path_stat": false, 00:24:18.358 "allow_accel_sequence": false, 00:24:18.358 "rdma_max_cq_size": 0, 00:24:18.358 "rdma_cm_event_timeout_ms": 0, 00:24:18.358 "dhchap_digests": [ 00:24:18.358 "sha256", 00:24:18.358 "sha384", 00:24:18.358 "sha512" 00:24:18.358 ], 00:24:18.358 "dhchap_dhgroups": [ 00:24:18.358 "null", 00:24:18.358 "ffdhe2048", 00:24:18.358 "ffdhe3072", 00:24:18.358 "ffdhe4096", 00:24:18.358 "ffdhe6144", 00:24:18.358 "ffdhe8192" 00:24:18.358 ] 00:24:18.358 } 00:24:18.358 }, 00:24:18.358 { 00:24:18.358 "method": "bdev_nvme_attach_controller", 00:24:18.358 "params": { 00:24:18.358 "name": "nvme0", 00:24:18.358 "trtype": "TCP", 00:24:18.358 "adrfam": "IPv4", 00:24:18.358 "traddr": "10.0.0.2", 00:24:18.358 "trsvcid": "4420", 00:24:18.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.358 "prchk_reftag": false, 00:24:18.358 "prchk_guard": false, 00:24:18.358 "ctrlr_loss_timeout_sec": 0, 00:24:18.358 "reconnect_delay_sec": 0, 00:24:18.358 "fast_io_fail_timeout_sec": 0, 00:24:18.358 "psk": "key0", 00:24:18.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.358 "hdgst": false, 00:24:18.358 "ddgst": false, 00:24:18.358 "multipath": "multipath" 00:24:18.358 } 00:24:18.358 }, 00:24:18.358 { 00:24:18.358 "method": "bdev_nvme_set_hotplug", 00:24:18.358 "params": { 00:24:18.358 "period_us": 100000, 00:24:18.358 "enable": false 00:24:18.358 } 00:24:18.358 }, 00:24:18.358 { 00:24:18.358 "method": "bdev_enable_histogram", 00:24:18.358 "params": { 00:24:18.358 "name": "nvme0n1", 00:24:18.358 "enable": true 00:24:18.358 } 00:24:18.358 }, 00:24:18.358 { 00:24:18.358 "method": "bdev_wait_for_examine" 00:24:18.358 } 00:24:18.358 ] 00:24:18.358 }, 00:24:18.358 { 00:24:18.358 "subsystem": "nbd", 00:24:18.358 "config": [] 00:24:18.358 } 00:24:18.358 ] 00:24:18.358 }' 00:24:18.358 [2024-11-06 10:16:21.784070] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:18.358 [2024-11-06 10:16:21.784120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920936 ] 00:24:18.618 [2024-11-06 10:16:21.873215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.618 [2024-11-06 10:16:21.903757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.618 [2024-11-06 10:16:22.038860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.189 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.189 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:19.189 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.189 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:19.449 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.449 10:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.449 Running I/O for 1 seconds... 00:24:20.391 4767.00 IOPS, 18.62 MiB/s 00:24:20.391 Latency(us) 00:24:20.391 [2024-11-06T09:16:23.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.391 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:20.391 Verification LBA range: start 0x0 length 0x2000 00:24:20.391 nvme0n1 : 1.02 4808.84 18.78 0.00 0.00 26437.55 6690.13 60293.12 00:24:20.391 [2024-11-06T09:16:23.892Z] =================================================================================================================== 00:24:20.391 [2024-11-06T09:16:23.892Z] Total : 4808.84 18.78 0.00 0.00 26437.55 6690.13 60293.12 00:24:20.391 { 00:24:20.391 "results": [ 00:24:20.391 { 00:24:20.391 "job": "nvme0n1", 00:24:20.391 "core_mask": "0x2", 00:24:20.391 "workload": "verify", 00:24:20.391 "status": "finished", 00:24:20.391 "verify_range": { 00:24:20.391 "start": 0, 00:24:20.391 "length": 8192 00:24:20.391 }, 00:24:20.391 "queue_depth": 128, 00:24:20.391 "io_size": 4096, 00:24:20.391 "runtime": 1.017917, 00:24:20.391 "iops": 4808.840013478505, 00:24:20.391 "mibps": 18.78453130265041, 00:24:20.391 "io_failed": 0, 00:24:20.391 "io_timeout": 0, 00:24:20.391 "avg_latency_us": 26437.55320394961, 00:24:20.391 "min_latency_us": 6690.133333333333, 00:24:20.391 "max_latency_us": 60293.12 00:24:20.391 } 00:24:20.391 ], 00:24:20.391 "core_count": 1 00:24:20.391 } 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
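The perform_tests output above ends with a plain JSON result block, so spot checks on throughput and latency can be scripted rather than read by eye. A small sketch, assuming that block has been saved to result.json (a hypothetical file name):

# Pull per-job throughput and latency out of a bdevperf JSON result block (illustrative)
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' result.json
# For the run above this prints:
#   nvme0n1: 4808.840013478505 IOPS, avg 26437.55320394961 us (min 6690.133333333333, max 60293.12)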
00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:20.391 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.391 nvmf_trace.0 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3920936 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3920936 ']' 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3920936 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.652 10:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3920936 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3920936' 00:24:20.652 killing process with pid 3920936 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3920936 00:24:20.652 Received shutdown signal, test time was about 1.000000 seconds 00:24:20.652 00:24:20.652 Latency(us) 00:24:20.652 [2024-11-06T09:16:24.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.652 [2024-11-06T09:16:24.153Z] =================================================================================================================== 00:24:20.652 [2024-11-06T09:16:24.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3920936 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.652 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.652 rmmod nvme_tcp 00:24:20.652 rmmod nvme_fabrics 00:24:20.912 rmmod nvme_keyring 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.912 10:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3920768 ']' 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3920768 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3920768 ']' 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3920768 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3920768 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3920768' 00:24:20.912 killing process with pid 3920768 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3920768 00:24:20.912 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3920768 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.913 10:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZA1AqHZlk0 /tmp/tmp.TAOJxOAhTC /tmp/tmp.VKr3S3tjPH 00:24:23.457 00:24:23.457 real 1m23.979s 00:24:23.457 user 2m8.914s 00:24:23.457 sys 0m27.591s 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.457 ************************************ 00:24:23.457 END TEST nvmf_tls 
00:24:23.457 ************************************ 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.457 ************************************ 00:24:23.457 START TEST nvmf_fips 00:24:23.457 ************************************ 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.457 * Looking for test storage... 00:24:23.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.457 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:23.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.458 --rc genhtml_branch_coverage=1 00:24:23.458 --rc genhtml_function_coverage=1 00:24:23.458 --rc genhtml_legend=1 00:24:23.458 --rc geninfo_all_blocks=1 00:24:23.458 --rc geninfo_unexecuted_blocks=1 00:24:23.458 00:24:23.458 ' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:23.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.458 --rc genhtml_branch_coverage=1 00:24:23.458 --rc genhtml_function_coverage=1 00:24:23.458 --rc genhtml_legend=1 00:24:23.458 --rc geninfo_all_blocks=1 00:24:23.458 --rc geninfo_unexecuted_blocks=1 00:24:23.458 00:24:23.458 ' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:23.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.458 --rc genhtml_branch_coverage=1 00:24:23.458 --rc genhtml_function_coverage=1 00:24:23.458 --rc genhtml_legend=1 00:24:23.458 --rc geninfo_all_blocks=1 00:24:23.458 --rc geninfo_unexecuted_blocks=1 00:24:23.458 00:24:23.458 ' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:23.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.458 --rc genhtml_branch_coverage=1 00:24:23.458 --rc genhtml_function_coverage=1 00:24:23.458 --rc genhtml_legend=1 00:24:23.458 --rc geninfo_all_blocks=1 00:24:23.458 --rc geninfo_unexecuted_blocks=1 00:24:23.458 00:24:23.458 ' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:23.458 10:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.458 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:23.459 Error setting digest 00:24:23.459 40D2C8EEEC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:23.459 40D2C8EEEC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.459 
10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.459 10:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.458 10:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.458 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:33.459 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:33.459 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.459 10:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:33.459 Found net devices under 0000:31:00.0: cvl_0_0 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:33.459 Found net devices under 0000:31:00.1: cvl_0_1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.459 10:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:24:33.459 00:24:33.459 --- 10.0.0.2 ping statistics --- 00:24:33.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.459 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:24:33.459 00:24:33.459 --- 10.0.0.1 ping statistics --- 00:24:33.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.459 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.459 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3926315 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3926315 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3926315 ']' 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.460 10:16:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.460 [2024-11-06 10:16:35.575816] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
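The nvmf_tcp_init trace above sets up the two-port TCP topology this FIPS test runs over: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction confirms connectivity. Collapsed to the commands visible in this log (interface names and addresses are the ones from this run), the setup is roughly:

  # target side lives in its own namespace, initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic to the target port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process started next is launched through ip netns exec cvl_0_0_ns_spdk, so the target application only sees the namespaced interface and its 10.0.0.2 listener address.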
00:24:33.460 [2024-11-06 10:16:35.575881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.460 [2024-11-06 10:16:35.685112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.460 [2024-11-06 10:16:35.734322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.460 [2024-11-06 10:16:35.734383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.460 [2024-11-06 10:16:35.734393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.460 [2024-11-06 10:16:35.734400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.460 [2024-11-06 10:16:35.734407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.460 [2024-11-06 10:16:35.735275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Srj 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Srj 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Srj 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Srj 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:33.460 [2024-11-06 10:16:36.584600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.460 [2024-11-06 10:16:36.600598] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:33.460 [2024-11-06 10:16:36.600953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.460 malloc0 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.460 10:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3926520 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3926520 /var/tmp/bdevperf.sock 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3926520 ']' 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.460 10:16:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.460 [2024-11-06 10:16:36.742468] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:33.460 [2024-11-06 10:16:36.742548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926520 ] 00:24:33.460 [2024-11-06 10:16:36.818219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.460 [2024-11-06 10:16:36.854423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.401 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.401 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:34.401 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Srj 00:24:34.401 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.401 [2024-11-06 10:16:37.865066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.661 TLSTESTn1 00:24:34.661 10:16:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.661 Running I/O for 10 seconds... 
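fips.sh then exercises the TLS data path: the PSK written to /tmp/spdk-psk.Srj is registered in bdevperf's keyring as key0, a TLS-protected NVMe/TCP controller is attached to the target listener at 10.0.0.2:4420, and the queued verify workload is started, whose progress and summary follow below. Reduced to the RPC calls shown in this trace, the sequence is approximately:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # register the pre-shared key file with the bdevperf application
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Srj
  # attach a TLS-enabled NVMe/TCP controller to the namespaced target using that key
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  # start the queued verify workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests

The bdevperf process itself was launched earlier with -z (wait for RPC start-up) and -q 128 -o 4096 -w verify -t 10, which is why the perform_tests call alone is enough to kick off the 10-second run measured below.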
00:24:36.983 5731.00 IOPS, 22.39 MiB/s [2024-11-06T09:16:41.424Z] 5686.50 IOPS, 22.21 MiB/s [2024-11-06T09:16:42.365Z] 5786.67 IOPS, 22.60 MiB/s [2024-11-06T09:16:43.307Z] 5659.50 IOPS, 22.11 MiB/s [2024-11-06T09:16:44.253Z] 5744.60 IOPS, 22.44 MiB/s [2024-11-06T09:16:45.195Z] 5596.00 IOPS, 21.86 MiB/s [2024-11-06T09:16:46.135Z] 5671.71 IOPS, 22.16 MiB/s [2024-11-06T09:16:47.515Z] 5563.62 IOPS, 21.73 MiB/s [2024-11-06T09:16:48.085Z] 5527.22 IOPS, 21.59 MiB/s [2024-11-06T09:16:48.345Z] 5378.30 IOPS, 21.01 MiB/s 00:24:44.844 Latency(us) 00:24:44.844 [2024-11-06T09:16:48.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.844 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.844 Verification LBA range: start 0x0 length 0x2000 00:24:44.844 TLSTESTn1 : 10.02 5381.34 21.02 0.00 0.00 23750.58 4642.13 24794.45 00:24:44.844 [2024-11-06T09:16:48.345Z] =================================================================================================================== 00:24:44.844 [2024-11-06T09:16:48.345Z] Total : 5381.34 21.02 0.00 0.00 23750.58 4642.13 24794.45 00:24:44.844 { 00:24:44.844 "results": [ 00:24:44.844 { 00:24:44.844 "job": "TLSTESTn1", 00:24:44.844 "core_mask": "0x4", 00:24:44.844 "workload": "verify", 00:24:44.844 "status": "finished", 00:24:44.844 "verify_range": { 00:24:44.844 "start": 0, 00:24:44.844 "length": 8192 00:24:44.844 }, 00:24:44.844 "queue_depth": 128, 00:24:44.844 "io_size": 4096, 00:24:44.844 "runtime": 10.017946, 00:24:44.844 "iops": 5381.342642493781, 00:24:44.844 "mibps": 21.02086969724133, 00:24:44.844 "io_failed": 0, 00:24:44.844 "io_timeout": 0, 00:24:44.844 "avg_latency_us": 23750.582934025846, 00:24:44.844 "min_latency_us": 4642.133333333333, 00:24:44.844 "max_latency_us": 24794.453333333335 00:24:44.844 } 00:24:44.844 ], 00:24:44.844 "core_count": 1 00:24:44.844 } 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:44.844 nvmf_trace.0 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3926520 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3926520 ']' 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 3926520 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3926520 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3926520' 00:24:44.844 killing process with pid 3926520 00:24:44.844 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3926520 00:24:44.844 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.845 00:24:44.845 Latency(us) 00:24:44.845 [2024-11-06T09:16:48.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.845 [2024-11-06T09:16:48.346Z] =================================================================================================================== 00:24:44.845 [2024-11-06T09:16:48.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.845 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3926520 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.105 rmmod nvme_tcp 00:24:45.105 rmmod nvme_fabrics 00:24:45.105 rmmod nvme_keyring 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3926315 ']' 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3926315 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3926315 ']' 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3926315 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3926315 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:45.105 10:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3926315' 00:24:45.105 killing process with pid 3926315 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3926315 00:24:45.105 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3926315 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.366 10:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Srj 00:24:47.277 00:24:47.277 real 0m24.165s 00:24:47.277 user 0m25.027s 00:24:47.277 sys 0m10.379s 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.277 ************************************ 00:24:47.277 END TEST nvmf_fips 00:24:47.277 ************************************ 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:47.277 10:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:47.540 ************************************ 00:24:47.540 START TEST nvmf_control_msg_list 00:24:47.540 ************************************ 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:47.540 * Looking for test storage... 
00:24:47.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.540 --rc genhtml_branch_coverage=1 00:24:47.540 --rc genhtml_function_coverage=1 00:24:47.540 --rc genhtml_legend=1 00:24:47.540 --rc geninfo_all_blocks=1 00:24:47.540 --rc geninfo_unexecuted_blocks=1 00:24:47.540 00:24:47.540 ' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.540 --rc genhtml_branch_coverage=1 00:24:47.540 --rc genhtml_function_coverage=1 00:24:47.540 --rc genhtml_legend=1 00:24:47.540 --rc geninfo_all_blocks=1 00:24:47.540 --rc geninfo_unexecuted_blocks=1 00:24:47.540 00:24:47.540 ' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.540 --rc genhtml_branch_coverage=1 00:24:47.540 --rc genhtml_function_coverage=1 00:24:47.540 --rc genhtml_legend=1 00:24:47.540 --rc geninfo_all_blocks=1 00:24:47.540 --rc geninfo_unexecuted_blocks=1 00:24:47.540 00:24:47.540 ' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.540 --rc genhtml_branch_coverage=1 00:24:47.540 --rc genhtml_function_coverage=1 00:24:47.540 --rc genhtml_legend=1 00:24:47.540 --rc geninfo_all_blocks=1 00:24:47.540 --rc geninfo_unexecuted_blocks=1 00:24:47.540 00:24:47.540 ' 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.540 10:16:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.540 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.541 10:16:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:55.680 10:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:55.680 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.680 10:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:55.680 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:55.680 Found net devices under 0000:31:00.0: cvl_0_0 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:55.680 Found net devices under 0000:31:00.1: cvl_0_1 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.680 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.681 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.942 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.203 10:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:24:56.203 00:24:56.203 --- 10.0.0.2 ping statistics --- 00:24:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.203 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:56.203 00:24:56.203 --- 10.0.0.1 ping statistics --- 00:24:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.203 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3933512 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3933512 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3933512 ']' 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.203 10:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:56.203 10:16:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:56.203 [2024-11-06 10:16:59.595152] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:56.203 [2024-11-06 10:16:59.595220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.203 [2024-11-06 10:16:59.685896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.463 [2024-11-06 10:16:59.726104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.463 [2024-11-06 10:16:59.726146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.463 [2024-11-06 10:16:59.726154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.463 [2024-11-06 10:16:59.726160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.463 [2024-11-06 10:16:59.726166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
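[editor's sketch] The nvmftestinit trace above boils down to a two-namespace loopback topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and serves as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule opens TCP/4420, and both directions are ping-verified before nvmf_tgt is started inside the namespace. A minimal hand-run sketch of the same setup, using the interface and namespace names from the log (relative paths and running as root are assumptions, not the harness itself):
# Recreate the netns split traced above (names taken from the log; adjust to your NICs).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application then runs inside the namespace, as in the trace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &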
00:24:56.463 [2024-11-06 10:16:59.726754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 [2024-11-06 10:17:00.432064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 Malloc0 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.033 10:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 [2024-11-06 10:17:00.466847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3933725 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3933726 00:24:57.033 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3933727 00:24:57.034 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3933725 00:24:57.034 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.034 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.034 10:17:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.293 [2024-11-06 10:17:00.535300] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:57.293 [2024-11-06 10:17:00.545214] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:57.293 [2024-11-06 10:17:00.555209] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:58.235 Initializing NVMe Controllers 00:24:58.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:58.235 Initialization complete. Launching workers. 
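[editor's sketch] The control_msg_list workload above is deliberately starved: the TCP transport is created with --in-capsule-data-size 768 and --control-msg-num 1, so control messages come from a pool of one, and three spdk_nvme_perf clients (core masks 0x2, 0x4 and 0x8, queue depth 1, 4 KiB random reads for one second) are launched concurrently against the same nqn.2024-07.io.spdk:cnode0 listener; the latency tables that follow show each client still completing. Re-run by hand, the sequence would look roughly like the sketch below (using scripts/rpc.py against the default /var/tmp/spdk.sock in place of the harness's rpc_cmd wrapper is an assumption; paths are relative to an SPDK checkout):
# Target bring-up with a single control message, mirroring the traced RPCs.
./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Three initiators, one per core, all contending for the same control message.
for mask in 0x2 0x4 0x8; do
  ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait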
00:24:58.235 ======================================================== 00:24:58.235 Latency(us) 00:24:58.235 Device Information : IOPS MiB/s Average min max 00:24:58.235 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40915.15 40795.25 41188.43 00:24:58.235 ======================================================== 00:24:58.235 Total : 25.00 0.10 40915.15 40795.25 41188.43 00:24:58.235 00:24:58.235 Initializing NVMe Controllers 00:24:58.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:58.235 Initialization complete. Launching workers. 00:24:58.235 ======================================================== 00:24:58.235 Latency(us) 00:24:58.235 Device Information : IOPS MiB/s Average min max 00:24:58.235 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40917.57 40845.82 41243.64 00:24:58.235 ======================================================== 00:24:58.235 Total : 25.00 0.10 40917.57 40845.82 41243.64 00:24:58.235 00:24:58.496 Initializing NVMe Controllers 00:24:58.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:58.496 Initialization complete. Launching workers. 00:24:58.496 ======================================================== 00:24:58.496 Latency(us) 00:24:58.496 Device Information : IOPS MiB/s Average min max 00:24:58.496 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40905.70 40781.98 41030.26 00:24:58.496 ======================================================== 00:24:58.496 Total : 25.00 0.10 40905.70 40781.98 41030.26 00:24:58.496 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3933726 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3933727 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.496 rmmod nvme_tcp 00:24:58.496 rmmod nvme_fabrics 00:24:58.496 rmmod nvme_keyring 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 3933512 ']' 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3933512 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3933512 ']' 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3933512 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3933512 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3933512' 00:24:58.496 killing process with pid 3933512 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3933512 00:24:58.496 10:17:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3933512 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.757 10:17:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.668 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.668 00:25:00.668 real 0m13.386s 00:25:00.668 user 0m8.314s 00:25:00.668 sys 0m7.270s 00:25:00.668 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:00.668 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.668 ************************************ 00:25:00.668 END TEST nvmf_control_msg_list 00:25:00.668 
************************************ 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.929 ************************************ 00:25:00.929 START TEST nvmf_wait_for_buf 00:25:00.929 ************************************ 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:00.929 * Looking for test storage... 00:25:00.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.929 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:01.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.191 --rc genhtml_branch_coverage=1 00:25:01.191 --rc genhtml_function_coverage=1 00:25:01.191 --rc genhtml_legend=1 00:25:01.191 --rc geninfo_all_blocks=1 00:25:01.191 --rc geninfo_unexecuted_blocks=1 00:25:01.191 00:25:01.191 ' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:01.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.191 --rc genhtml_branch_coverage=1 00:25:01.191 --rc genhtml_function_coverage=1 00:25:01.191 --rc genhtml_legend=1 00:25:01.191 --rc geninfo_all_blocks=1 00:25:01.191 --rc geninfo_unexecuted_blocks=1 00:25:01.191 00:25:01.191 ' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:01.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.191 --rc genhtml_branch_coverage=1 00:25:01.191 --rc genhtml_function_coverage=1 00:25:01.191 --rc genhtml_legend=1 00:25:01.191 --rc geninfo_all_blocks=1 00:25:01.191 --rc geninfo_unexecuted_blocks=1 00:25:01.191 00:25:01.191 ' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:01.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.191 --rc genhtml_branch_coverage=1 00:25:01.191 --rc genhtml_function_coverage=1 00:25:01.191 --rc genhtml_legend=1 00:25:01.191 --rc geninfo_all_blocks=1 00:25:01.191 --rc geninfo_unexecuted_blocks=1 00:25:01.191 00:25:01.191 ' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.191 10:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.191 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.192 10:17:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.330 
10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:09.330 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:09.330 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:09.330 Found net devices under 0000:31:00.0: cvl_0_0 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:09.330 Found net devices under 0000:31:00.1: cvl_0_1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.330 10:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.330 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.331 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:25:09.591 00:25:09.591 --- 10.0.0.2 ping statistics --- 00:25:09.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.591 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:09.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:25:09.591 00:25:09.591 --- 10.0.0.1 ping statistics --- 00:25:09.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.591 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.591 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3938748 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3938748 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3938748 ']' 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:09.592 10:17:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.592 [2024-11-06 10:17:12.959482] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
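[editor's sketch] For the wait_for_buf case the target is started with --wait-for-rpc, and the RPCs traced below shrink the buffer budget before initialization finishes: the accel caches are zeroed, the iobuf small pool is capped at 154 buffers of 8192 bytes, and the transport is created with only 24 shared buffers (-n 24 -b 24, 8 KiB I/O units), so the 128 KiB queue-depth-4 reads issued afterwards have to wait for buffers rather than complete immediately, which is the behavior under test. A condensed, hand-runnable version of that sequence (again substituting scripts/rpc.py for the harness wrapper; relative paths are assumptions):
# wait_for_buf bring-up: constrain buffers before framework_start_init, then drive large reads.
./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # tiny iobuf small pool
./scripts/rpc.py framework_start_init                      # only now finish startup (--wait-for-rpc mode)
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # 24 shared buffers
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# 128 KiB reads at QD 4 exceed the constrained buffer budget, so requests must wait for buffers.
./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'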
00:25:09.592 [2024-11-06 10:17:12.959550] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.592 [2024-11-06 10:17:13.049190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.592 [2024-11-06 10:17:13.089102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.592 [2024-11-06 10:17:13.089140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.592 [2024-11-06 10:17:13.089148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.592 [2024-11-06 10:17:13.089155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.592 [2024-11-06 10:17:13.089160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.592 [2024-11-06 10:17:13.089791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 Malloc0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 [2024-11-06 10:17:13.892489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.533 [2024-11-06 10:17:13.928720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.533 10:17:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.533 [2024-11-06 10:17:14.029680] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:12.053 Initializing NVMe Controllers 00:25:12.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:12.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:12.053 Initialization complete. Launching workers. 00:25:12.053 ======================================================== 00:25:12.053 Latency(us) 00:25:12.053 Device Information : IOPS MiB/s Average min max 00:25:12.053 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.64 8001.28 63852.28 00:25:12.053 ======================================================== 00:25:12.053 Total : 129.00 16.12 32295.64 8001.28 63852.28 00:25:12.053 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.053 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.053 rmmod nvme_tcp 00:25:12.314 rmmod nvme_fabrics 00:25:12.314 rmmod nvme_keyring 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3938748 ']' 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3938748 ']' 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3938748' 00:25:12.314 killing process with pid 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3938748 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.314 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.315 10:17:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.860 00:25:14.860 real 0m13.637s 00:25:14.860 user 0m5.326s 00:25:14.860 sys 0m6.863s 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.860 ************************************ 00:25:14.860 END TEST nvmf_wait_for_buf 00:25:14.860 ************************************ 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:14.860 10:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.860 10:17:17 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:22.999 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:22.999 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:22.999 Found net devices under 0000:31:00.0: cvl_0_0 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.999 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:23.000 Found net devices under 0000:31:00.1: cvl_0_1 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:23.000 10:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:23.000 ************************************ 00:25:23.000 START TEST nvmf_perf_adq 00:25:23.000 ************************************ 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:23.000 * Looking for test storage... 00:25:23.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.000 10:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:23.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.000 --rc genhtml_branch_coverage=1 00:25:23.000 --rc genhtml_function_coverage=1 00:25:23.000 --rc genhtml_legend=1 00:25:23.000 --rc geninfo_all_blocks=1 00:25:23.000 --rc geninfo_unexecuted_blocks=1 00:25:23.000 00:25:23.000 ' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:23.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.000 --rc genhtml_branch_coverage=1 00:25:23.000 --rc genhtml_function_coverage=1 00:25:23.000 --rc genhtml_legend=1 00:25:23.000 --rc geninfo_all_blocks=1 00:25:23.000 --rc geninfo_unexecuted_blocks=1 00:25:23.000 00:25:23.000 ' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:23.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.000 --rc genhtml_branch_coverage=1 00:25:23.000 --rc genhtml_function_coverage=1 00:25:23.000 --rc genhtml_legend=1 00:25:23.000 --rc geninfo_all_blocks=1 00:25:23.000 --rc geninfo_unexecuted_blocks=1 00:25:23.000 00:25:23.000 ' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:23.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.000 --rc genhtml_branch_coverage=1 00:25:23.000 --rc genhtml_function_coverage=1 00:25:23.000 --rc genhtml_legend=1 00:25:23.000 --rc geninfo_all_blocks=1 00:25:23.000 --rc geninfo_unexecuted_blocks=1 00:25:23.000 00:25:23.000 ' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
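The gather_supported_nvmf_pci_devs trace above (repeated again later in this test) matches the node's PCI functions against the supported Intel/Mellanox device IDs and resolves each hit to its kernel net device through sysfs. A minimal equivalent of that lookup, using the first E810 function found on this particular node:

    # Resolve a PCI function to its net device name the same way the trace does.
    # 0000:31:00.0 is the first 0x8086:0x159b (E810) port discovered on this node.
    pci=0000:31:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue
        echo "Found net devices under $pci: ${path##*/}"    # prints cvl_0_0 here
    done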
00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.000 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:23.001 10:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.001 10:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.139 10:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:31.139 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:31.139 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:31.139 Found net devices under 0000:31:00.0: cvl_0_0 00:25:31.139 10:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:31.139 Found net devices under 0000:31:00.1: cvl_0_1 00:25:31.139 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:31.140 10:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:32.522 10:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:34.429 10:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:39.722 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:39.722 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:39.722 Found net devices under 0000:31:00.0: cvl_0_0 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:39.722 Found net devices under 0000:31:00.1: cvl_0_1 00:25:39.722 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.723 10:17:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:25:39.723 00:25:39.723 --- 10.0.0.2 ping statistics --- 00:25:39.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.723 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:25:39.723 00:25:39.723 --- 10.0.0.1 ping statistics --- 00:25:39.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.723 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3950031 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3950031 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3950031 ']' 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:39.723 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.723 [2024-11-06 10:17:43.178030] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
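The trace above has just finished wiring up the loopback test topology: the E810 port cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), its peer port cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), a firewall exception is opened for the NVMe/TCP port, and reachability is verified with a ping in each direction before nvmf_tgt is started inside the namespace. Condensed into the underlying commands (names, addresses and paths taken from the log; this is a readability sketch of what nvmf_tcp_init does, not a substitute for nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc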
00:25:39.723 [2024-11-06 10:17:43.178092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.984 [2024-11-06 10:17:43.265186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.984 [2024-11-06 10:17:43.302710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.984 [2024-11-06 10:17:43.302745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.984 [2024-11-06 10:17:43.302753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.984 [2024-11-06 10:17:43.302759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.984 [2024-11-06 10:17:43.302765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.984 [2024-11-06 10:17:43.304269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.984 [2024-11-06 10:17:43.304394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.984 [2024-11-06 10:17:43.304549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.984 [2024-11-06 10:17:43.304551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.555 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.555 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:40.555 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.555 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:40.555 10:17:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.555 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 
10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 [2024-11-06 10:17:44.154374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 Malloc1 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:40.816 [2024-11-06 10:17:44.221246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3950384 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:40.816 10:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
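With the target up, the baseline run (adq_configure_nvmf_target 0) is configured entirely over JSON-RPC: placement id 0 on the posix sock implementation, a TCP transport with socket priority 0, one 64 MiB malloc bdev, and one subsystem listening on 10.0.0.2:4420. rpc_cmd in the trace is the harness's RPC wrapper; issued by hand the same sequence would look roughly like this (flags copied from the trace, paths assumed relative to the SPDK repo):

scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The second pass later in the log repeats this with --enable-placement-id 1 and --sock-priority 1, which is the ADQ-enabled variant of the same configuration.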
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:43.358 "tick_rate": 2400000000, 00:25:43.358 "poll_groups": [ 00:25:43.358 { 00:25:43.358 "name": "nvmf_tgt_poll_group_000", 00:25:43.358 "admin_qpairs": 1, 00:25:43.358 "io_qpairs": 1, 00:25:43.358 "current_admin_qpairs": 1, 00:25:43.358 "current_io_qpairs": 1, 00:25:43.358 "pending_bdev_io": 0, 00:25:43.358 "completed_nvme_io": 19945, 00:25:43.358 "transports": [ 00:25:43.358 { 00:25:43.358 "trtype": "TCP" 00:25:43.358 } 00:25:43.358 ] 00:25:43.358 }, 00:25:43.358 { 00:25:43.358 "name": "nvmf_tgt_poll_group_001", 00:25:43.358 "admin_qpairs": 0, 00:25:43.358 "io_qpairs": 1, 00:25:43.358 "current_admin_qpairs": 0, 00:25:43.358 "current_io_qpairs": 1, 00:25:43.358 "pending_bdev_io": 0, 00:25:43.358 "completed_nvme_io": 28760, 00:25:43.358 "transports": [ 00:25:43.358 { 00:25:43.358 "trtype": "TCP" 00:25:43.358 } 00:25:43.358 ] 00:25:43.358 }, 00:25:43.358 { 00:25:43.358 "name": "nvmf_tgt_poll_group_002", 00:25:43.358 "admin_qpairs": 0, 00:25:43.358 "io_qpairs": 1, 00:25:43.358 "current_admin_qpairs": 0, 00:25:43.358 "current_io_qpairs": 1, 00:25:43.358 "pending_bdev_io": 0, 00:25:43.358 "completed_nvme_io": 21424, 00:25:43.358 "transports": [ 00:25:43.358 { 00:25:43.358 "trtype": "TCP" 00:25:43.358 } 00:25:43.358 ] 00:25:43.358 }, 00:25:43.358 { 00:25:43.358 "name": "nvmf_tgt_poll_group_003", 00:25:43.358 "admin_qpairs": 0, 00:25:43.358 "io_qpairs": 1, 00:25:43.358 "current_admin_qpairs": 0, 00:25:43.358 "current_io_qpairs": 1, 00:25:43.358 "pending_bdev_io": 0, 00:25:43.358 "completed_nvme_io": 20627, 00:25:43.358 "transports": [ 00:25:43.358 { 00:25:43.358 "trtype": "TCP" 00:25:43.358 } 00:25:43.358 ] 00:25:43.358 } 00:25:43.358 ] 00:25:43.358 }' 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:43.358 10:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3950384 00:25:51.495 Initializing NVMe Controllers 00:25:51.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:51.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:51.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:51.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:25:51.495 Initialization complete. Launching workers. 00:25:51.495 ======================================================== 00:25:51.495 Latency(us) 00:25:51.495 Device Information : IOPS MiB/s Average min max 00:25:51.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11195.50 43.73 5717.45 1616.70 9495.35 00:25:51.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14839.50 57.97 4312.90 1495.74 7618.40 00:25:51.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13656.20 53.34 4686.29 1175.64 9903.29 00:25:51.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13625.30 53.22 4697.19 1428.74 11107.00 00:25:51.495 ======================================================== 00:25:51.495 Total : 53316.50 208.27 4801.67 1175.64 11107.00 00:25:51.495 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.495 rmmod nvme_tcp 00:25:51.495 rmmod nvme_fabrics 00:25:51.495 rmmod nvme_keyring 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3950031 ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3950031 ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3950031' 00:25:51.495 killing process with pid 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3950031 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
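The pass/fail criterion for the run above is not the IOPS figure but the qpair placement: while spdk_nvme_perf (four initiator cores, -c 0xF0, against the four target reactors on -m 0xF) is connected, the script pulls nvmf_get_stats and counts poll groups that own exactly one I/O qpair; in the baseline all four groups must be busy. Written as standalone shell, with rpc_cmd from the trace standing in for rpc.py:

stats=$(scripts/rpc.py nvmf_get_stats)
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
[[ $count -ne 4 ]] && echo "connections were not spread across all poll groups" >&2

The ADQ-enabled pass later in the log flips the filter around (select(.current_io_qpairs == 0)) to check that placement has instead concentrated the connections onto fewer poll groups; in that trace all four I/O qpairs land on nvmf_tgt_poll_group_000 and the other three groups stay idle.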
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.495 10:17:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.407 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.407 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:53.407 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:53.407 10:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:55.317 10:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:57.229 10:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.521 10:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:02.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:02.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.521 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:02.522 Found net devices under 0000:31:00.0: cvl_0_0 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.522 10:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:02.522 Found net devices under 0000:31:00.1: cvl_0_1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:26:02.522 00:26:02.522 --- 10.0.0.2 ping statistics --- 00:26:02.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.522 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:26:02.522 00:26:02.522 --- 10.0.0.1 ping statistics --- 00:26:02.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.522 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:02.522 net.core.busy_poll = 1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:26:02.522 net.core.busy_read = 1 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:02.522 10:18:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3954964 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3954964 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3954964 ']' 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:02.785 10:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:02.785 [2024-11-06 10:18:06.193703] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:02.785 [2024-11-06 10:18:06.193771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.047 [2024-11-06 10:18:06.289990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.047 [2024-11-06 10:18:06.331954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
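Before the ADQ pass, adq_configure_driver reprograms the E810 port inside the namespace: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, kernel busy polling enabled, an mqprio root qdisc in channel mode with two traffic classes (TC0 gets 2 queues at offset 0, TC1 gets 2 queues at offset 2), and a flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 to TC 1 in hardware; set_xps_rxqs (an SPDK helper script) then configures transmit packet steering for the port. Condensed from the trace, with the /usr/sbin path on tc shortened:

ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0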
00:26:03.047 [2024-11-06 10:18:06.331990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.047 [2024-11-06 10:18:06.331999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.047 [2024-11-06 10:18:06.332006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.047 [2024-11-06 10:18:06.332011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.047 [2024-11-06 10:18:06.333761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.047 [2024-11-06 10:18:06.333898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.047 [2024-11-06 10:18:06.333998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.047 [2024-11-06 10:18:06.333998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.618 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.879 10:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.879 [2024-11-06 10:18:07.179618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.879 Malloc1 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.879 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.880 [2024-11-06 10:18:07.249250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.880 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.880 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3955309 00:26:03.880 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:03.880 10:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.793 10:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:05.793 "tick_rate": 2400000000, 00:26:05.793 "poll_groups": [ 00:26:05.793 { 00:26:05.793 "name": "nvmf_tgt_poll_group_000", 00:26:05.793 "admin_qpairs": 1, 00:26:05.793 "io_qpairs": 4, 00:26:05.793 "current_admin_qpairs": 1, 00:26:05.793 "current_io_qpairs": 4, 00:26:05.793 "pending_bdev_io": 0, 00:26:05.793 "completed_nvme_io": 34999, 00:26:05.793 "transports": [ 00:26:05.793 { 00:26:05.793 "trtype": "TCP" 00:26:05.793 } 00:26:05.793 ] 00:26:05.793 }, 00:26:05.793 { 00:26:05.793 "name": "nvmf_tgt_poll_group_001", 00:26:05.793 "admin_qpairs": 0, 00:26:05.793 "io_qpairs": 0, 00:26:05.793 "current_admin_qpairs": 0, 00:26:05.793 "current_io_qpairs": 0, 00:26:05.793 "pending_bdev_io": 0, 00:26:05.793 "completed_nvme_io": 0, 00:26:05.793 "transports": [ 00:26:05.793 { 00:26:05.793 "trtype": "TCP" 00:26:05.793 } 00:26:05.793 ] 00:26:05.793 }, 00:26:05.793 { 00:26:05.793 "name": "nvmf_tgt_poll_group_002", 00:26:05.793 "admin_qpairs": 0, 00:26:05.793 "io_qpairs": 0, 00:26:05.793 "current_admin_qpairs": 0, 00:26:05.793 "current_io_qpairs": 0, 00:26:05.793 "pending_bdev_io": 0, 00:26:05.793 "completed_nvme_io": 0, 00:26:05.793 "transports": [ 00:26:05.793 { 00:26:05.793 "trtype": "TCP" 00:26:05.793 } 00:26:05.793 ] 00:26:05.793 }, 00:26:05.793 { 00:26:05.793 "name": "nvmf_tgt_poll_group_003", 00:26:05.793 "admin_qpairs": 0, 00:26:05.793 "io_qpairs": 0, 00:26:05.793 "current_admin_qpairs": 0, 00:26:05.793 "current_io_qpairs": 0, 00:26:05.793 "pending_bdev_io": 0, 00:26:05.793 "completed_nvme_io": 0, 00:26:05.793 "transports": [ 00:26:05.793 { 00:26:05.793 "trtype": "TCP" 00:26:05.793 } 00:26:05.793 ] 00:26:05.793 } 00:26:05.793 ] 00:26:05.793 }' 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:05.793 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:06.054 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:26:06.054 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:26:06.054 10:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3955309 00:26:14.189 Initializing NVMe Controllers 00:26:14.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:14.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:14.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:14.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:14.189 Initialization complete. Launching workers. 
00:26:14.189 ======================================================== 00:26:14.189 Latency(us) 00:26:14.189 Device Information : IOPS MiB/s Average min max 00:26:14.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6382.20 24.93 10027.57 1168.55 58516.54 00:26:14.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6574.80 25.68 9736.76 1186.57 58842.61 00:26:14.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5478.00 21.40 11715.38 1327.41 61207.39 00:26:14.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6509.40 25.43 9833.35 1138.51 55427.30 00:26:14.189 ======================================================== 00:26:14.189 Total : 24944.40 97.44 10270.89 1138.51 61207.39 00:26:14.189 00:26:14.189 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.190 rmmod nvme_tcp 00:26:14.190 rmmod nvme_fabrics 00:26:14.190 rmmod nvme_keyring 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3954964 ']' 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3954964 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3954964 ']' 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3954964 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3954964 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3954964' 00:26:14.190 killing process with pid 3954964 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3954964 00:26:14.190 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3954964 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.451 
10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.451 10:18:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:16.377 00:26:16.377 real 0m53.786s 00:26:16.377 user 2m50.254s 00:26:16.377 sys 0m11.569s 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.377 ************************************ 00:26:16.377 END TEST nvmf_perf_adq 00:26:16.377 ************************************ 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:16.377 10:18:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:16.640 ************************************ 00:26:16.640 START TEST nvmf_shutdown 00:26:16.640 ************************************ 00:26:16.640 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:16.640 * Looking for test storage... 
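The teardown that closes the perf_adq test above is deliberately narrow: every firewall rule the harness adds carries an SPDK_NVMF comment, so nvmftestfini can strip exactly those rules and then drop the test namespace without disturbing the rest of the host configuration. The idiom, reconstructed from the trace (the explicit ip netns delete is an assumption about what remove_spdk_ns does; the trace only shows the helper name and the final address flush):

# setup: the rule is tagged so it can be found again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: rewrite the ruleset minus anything tagged SPDK_NVMF, then remove the namespace
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of the remove_spdk_ns helper
ip -4 addr flush cvl_0_1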
00:26:16.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.641 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:16.641 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:26:16.641 10:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.641 --rc genhtml_branch_coverage=1 00:26:16.641 --rc genhtml_function_coverage=1 00:26:16.641 --rc genhtml_legend=1 00:26:16.641 --rc geninfo_all_blocks=1 00:26:16.641 --rc geninfo_unexecuted_blocks=1 00:26:16.641 00:26:16.641 ' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.641 --rc genhtml_branch_coverage=1 00:26:16.641 --rc genhtml_function_coverage=1 00:26:16.641 --rc genhtml_legend=1 00:26:16.641 --rc geninfo_all_blocks=1 00:26:16.641 --rc geninfo_unexecuted_blocks=1 00:26:16.641 00:26:16.641 ' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.641 --rc genhtml_branch_coverage=1 00:26:16.641 --rc genhtml_function_coverage=1 00:26:16.641 --rc genhtml_legend=1 00:26:16.641 --rc geninfo_all_blocks=1 00:26:16.641 --rc geninfo_unexecuted_blocks=1 00:26:16.641 00:26:16.641 ' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.641 --rc genhtml_branch_coverage=1 00:26:16.641 --rc genhtml_function_coverage=1 00:26:16.641 --rc genhtml_legend=1 00:26:16.641 --rc geninfo_all_blocks=1 00:26:16.641 --rc geninfo_unexecuted_blocks=1 00:26:16.641 00:26:16.641 ' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
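The cmp_versions trace above is the harness checking the installed lcov (1.15) against version 2: both version strings are split on '.', '-' and ':' and the numeric components are compared left to right, so 1.15 sorts below 2 and the matching --rc coverage flags are exported into LCOV_OPTS. A minimal standalone sketch of that comparison, with illustrative names rather than the repo's actual scripts/common.sh:

version_lt() {
    # Split on the same separators the trace shows (., -, :) and compare the
    # numeric components left to right; missing components count as 0.
    local IFS=.-: i a b
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0      # first differing component decides
        ((a > b)) && return 1
    done
    return 1                       # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2, pick the matching lcov options"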
00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:16.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:16.641 10:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:16.641 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.642 ************************************ 00:26:16.642 START TEST nvmf_shutdown_tc1 00:26:16.642 ************************************ 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:16.642 10:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.786 10:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.786 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.787 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.788 10:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:24.788 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:24.788 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.788 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:24.789 Found net devices under 0000:31:00.0: cvl_0_0 00:26:24.789 10:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:24.789 Found net devices under 0000:31:00.1: cvl_0_1 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.789 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.790 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:26:25.059 00:26:25.059 --- 10.0.0.2 ping statistics --- 00:26:25.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.059 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:26:25.059 00:26:25.059 --- 10.0.0.1 ping statistics --- 00:26:25.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.059 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.059 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3962572 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3962572 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3962572 ']' 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
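The nvmf_tcp_init sequence above builds the point-to-point test topology this run uses: the target-side port (cvl_0_0) is moved into a fresh network namespace, both ends get 10.0.0.0/24 addresses, an iptables rule opens the NVMe/TCP port 4420 toward the initiator-side port, and a ping in each direction confirms the path before nvmf_tgt is launched inside the namespace with core mask 0x1E. Condensed to the underlying commands (interface names are the ones detected on this host and will differ elsewhere):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace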
00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.320 10:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 [2024-11-06 10:18:28.625403] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:25.320 [2024-11-06 10:18:28.625464] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.320 [2024-11-06 10:18:28.712183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.320 [2024-11-06 10:18:28.748241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.320 [2024-11-06 10:18:28.748274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.320 [2024-11-06 10:18:28.748282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.320 [2024-11-06 10:18:28.748288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.320 [2024-11-06 10:18:28.748294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.320 [2024-11-06 10:18:28.750036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.320 [2024-11-06 10:18:28.750194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.320 [2024-11-06 10:18:28.750347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.320 [2024-11-06 10:18:28.750349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.261 [2024-11-06 10:18:29.467675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:26.261 10:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.261 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.261 Malloc1 
00:26:26.261 [2024-11-06 10:18:29.585102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.261 Malloc2 00:26:26.261 Malloc3 00:26:26.261 Malloc4 00:26:26.261 Malloc5 00:26:26.261 Malloc6 00:26:26.522 Malloc7 00:26:26.522 Malloc8 00:26:26.522 Malloc9 00:26:26.522 Malloc10 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3962949 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3962949 /var/tmp/bdevperf.sock 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3962949 ']' 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
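The Malloc1 through Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above come from the per-subsystem fragments that target/shutdown.sh@29 concatenates and replays through rpc_cmd: each of the ten iterations creates a 64 MiB / 512-byte-block malloc bdev, a cnode<i> subsystem with that bdev as a namespace, and a TCP listener. A hedged sketch of one iteration expressed as plain rpc.py calls (the test drives equivalent RPCs through rpc_cmd inside the cvl_0_0_ns_spdk namespace, not literally this script; the serial number here is illustrative):

i=1
rpc="./scripts/rpc.py"                                   # point at the running nvmf_tgt RPC socket
$rpc nvmf_create_transport -t tcp -o -u 8192             # as issued at target/shutdown.sh@21 above
$rpc bdev_malloc_create -b Malloc$i 64 512               # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420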
00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.522 { 00:26:26.522 "params": { 00:26:26.522 "name": "Nvme$subsystem", 00:26:26.522 "trtype": "$TEST_TRANSPORT", 00:26:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.522 "adrfam": "ipv4", 00:26:26.522 "trsvcid": "$NVMF_PORT", 00:26:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.522 "hdgst": ${hdgst:-false}, 00:26:26.522 "ddgst": ${ddgst:-false} 00:26:26.522 }, 00:26:26.522 "method": "bdev_nvme_attach_controller" 00:26:26.522 } 00:26:26.522 EOF 00:26:26.522 )") 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.522 10:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.522 { 00:26:26.522 "params": { 00:26:26.522 "name": "Nvme$subsystem", 00:26:26.522 "trtype": "$TEST_TRANSPORT", 00:26:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.522 "adrfam": "ipv4", 00:26:26.522 "trsvcid": "$NVMF_PORT", 00:26:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.522 "hdgst": ${hdgst:-false}, 00:26:26.522 "ddgst": ${ddgst:-false} 00:26:26.522 }, 00:26:26.522 "method": "bdev_nvme_attach_controller" 00:26:26.522 } 00:26:26.522 EOF 00:26:26.522 )") 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.522 { 00:26:26.522 "params": { 00:26:26.522 "name": "Nvme$subsystem", 00:26:26.522 "trtype": "$TEST_TRANSPORT", 00:26:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.522 "adrfam": "ipv4", 00:26:26.522 "trsvcid": "$NVMF_PORT", 00:26:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.522 "hdgst": ${hdgst:-false}, 00:26:26.522 "ddgst": ${ddgst:-false} 00:26:26.522 }, 00:26:26.522 "method": "bdev_nvme_attach_controller" 
00:26:26.522 } 00:26:26.522 EOF 00:26:26.522 )") 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.522 { 00:26:26.522 "params": { 00:26:26.522 "name": "Nvme$subsystem", 00:26:26.522 "trtype": "$TEST_TRANSPORT", 00:26:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.522 "adrfam": "ipv4", 00:26:26.522 "trsvcid": "$NVMF_PORT", 00:26:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.522 "hdgst": ${hdgst:-false}, 00:26:26.522 "ddgst": ${ddgst:-false} 00:26:26.522 }, 00:26:26.522 "method": "bdev_nvme_attach_controller" 00:26:26.522 } 00:26:26.522 EOF 00:26:26.522 )") 00:26:26.522 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.788 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.788 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.788 { 00:26:26.788 "params": { 00:26:26.788 "name": "Nvme$subsystem", 00:26:26.788 "trtype": "$TEST_TRANSPORT", 00:26:26.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.788 "adrfam": "ipv4", 00:26:26.788 "trsvcid": "$NVMF_PORT", 00:26:26.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.788 "hdgst": ${hdgst:-false}, 00:26:26.788 "ddgst": ${ddgst:-false} 00:26:26.788 }, 00:26:26.788 "method": "bdev_nvme_attach_controller" 00:26:26.788 } 00:26:26.788 EOF 00:26:26.788 )") 00:26:26.788 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.788 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.788 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.788 { 00:26:26.788 "params": { 00:26:26.788 "name": "Nvme$subsystem", 00:26:26.788 "trtype": "$TEST_TRANSPORT", 00:26:26.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.788 "adrfam": "ipv4", 00:26:26.788 "trsvcid": "$NVMF_PORT", 00:26:26.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.789 "hdgst": ${hdgst:-false}, 00:26:26.789 "ddgst": ${ddgst:-false} 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 } 00:26:26.789 EOF 00:26:26.789 )") 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.789 [2024-11-06 10:18:30.040278] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:26:26.789 [2024-11-06 10:18:30.040335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.789 { 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme$subsystem", 00:26:26.789 "trtype": "$TEST_TRANSPORT", 00:26:26.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "$NVMF_PORT", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.789 "hdgst": ${hdgst:-false}, 00:26:26.789 "ddgst": ${ddgst:-false} 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 } 00:26:26.789 EOF 00:26:26.789 )") 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.789 { 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme$subsystem", 00:26:26.789 "trtype": "$TEST_TRANSPORT", 00:26:26.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "$NVMF_PORT", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.789 "hdgst": ${hdgst:-false}, 00:26:26.789 "ddgst": ${ddgst:-false} 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 } 00:26:26.789 EOF 00:26:26.789 )") 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.789 { 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme$subsystem", 00:26:26.789 "trtype": "$TEST_TRANSPORT", 00:26:26.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "$NVMF_PORT", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.789 "hdgst": ${hdgst:-false}, 00:26:26.789 "ddgst": ${ddgst:-false} 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 } 00:26:26.789 EOF 00:26:26.789 )") 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.789 { 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme$subsystem", 00:26:26.789 "trtype": "$TEST_TRANSPORT", 00:26:26.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.789 "adrfam": "ipv4", 
00:26:26.789 "trsvcid": "$NVMF_PORT", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.789 "hdgst": ${hdgst:-false}, 00:26:26.789 "ddgst": ${ddgst:-false} 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 } 00:26:26.789 EOF 00:26:26.789 )") 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:26.789 10:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme1", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme2", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme3", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme4", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme5", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme6", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme7", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 
"adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme8", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme9", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.789 }, 00:26:26.789 "method": "bdev_nvme_attach_controller" 00:26:26.789 },{ 00:26:26.789 "params": { 00:26:26.789 "name": "Nvme10", 00:26:26.789 "trtype": "tcp", 00:26:26.789 "traddr": "10.0.0.2", 00:26:26.789 "adrfam": "ipv4", 00:26:26.789 "trsvcid": "4420", 00:26:26.789 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:26.789 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:26.789 "hdgst": false, 00:26:26.789 "ddgst": false 00:26:26.790 }, 00:26:26.790 "method": "bdev_nvme_attach_controller" 00:26:26.790 }' 00:26:26.790 [2024-11-06 10:18:30.120208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.790 [2024-11-06 10:18:30.156595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3962949 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:28.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3962949 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:28.276 10:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3962572 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.217 { 00:26:29.217 "params": { 00:26:29.217 "name": "Nvme$subsystem", 00:26:29.217 "trtype": "$TEST_TRANSPORT", 00:26:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.217 "adrfam": "ipv4", 00:26:29.217 "trsvcid": "$NVMF_PORT", 00:26:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.217 "hdgst": ${hdgst:-false}, 00:26:29.217 "ddgst": ${ddgst:-false} 00:26:29.217 }, 00:26:29.217 "method": "bdev_nvme_attach_controller" 00:26:29.217 } 00:26:29.217 EOF 00:26:29.217 )") 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.217 { 00:26:29.217 "params": { 00:26:29.217 "name": "Nvme$subsystem", 00:26:29.217 "trtype": "$TEST_TRANSPORT", 00:26:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.217 "adrfam": "ipv4", 00:26:29.217 "trsvcid": "$NVMF_PORT", 00:26:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.217 "hdgst": ${hdgst:-false}, 00:26:29.217 "ddgst": ${ddgst:-false} 00:26:29.217 }, 00:26:29.217 "method": "bdev_nvme_attach_controller" 00:26:29.217 } 00:26:29.217 EOF 00:26:29.217 )") 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.217 { 00:26:29.217 "params": { 00:26:29.217 "name": "Nvme$subsystem", 00:26:29.217 "trtype": "$TEST_TRANSPORT", 00:26:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.217 "adrfam": "ipv4", 00:26:29.217 "trsvcid": "$NVMF_PORT", 00:26:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.217 "hdgst": ${hdgst:-false}, 00:26:29.217 "ddgst": ${ddgst:-false} 00:26:29.217 }, 00:26:29.217 "method": "bdev_nvme_attach_controller" 00:26:29.217 } 00:26:29.217 EOF 00:26:29.217 )") 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.217 { 00:26:29.217 "params": { 00:26:29.217 "name": "Nvme$subsystem", 00:26:29.217 "trtype": "$TEST_TRANSPORT", 00:26:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.217 "adrfam": "ipv4", 00:26:29.217 "trsvcid": "$NVMF_PORT", 00:26:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.217 "hdgst": ${hdgst:-false}, 00:26:29.217 "ddgst": ${ddgst:-false} 00:26:29.217 }, 00:26:29.217 "method": "bdev_nvme_attach_controller" 00:26:29.217 } 00:26:29.217 EOF 00:26:29.217 )") 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.217 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.217 { 00:26:29.217 "params": { 00:26:29.217 "name": "Nvme$subsystem", 00:26:29.217 "trtype": "$TEST_TRANSPORT", 00:26:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.217 "adrfam": "ipv4", 00:26:29.217 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.218 { 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme$subsystem", 00:26:29.218 "trtype": "$TEST_TRANSPORT", 00:26:29.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.218 { 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme$subsystem", 00:26:29.218 "trtype": "$TEST_TRANSPORT", 00:26:29.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.218 { 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme$subsystem", 00:26:29.218 "trtype": "$TEST_TRANSPORT", 00:26:29.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 [2024-11-06 10:18:32.412212] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:29.218 [2024-11-06 10:18:32.412265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963323 ] 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.218 { 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme$subsystem", 00:26:29.218 "trtype": "$TEST_TRANSPORT", 00:26:29.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:29.218 { 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme$subsystem", 00:26:29.218 "trtype": "$TEST_TRANSPORT", 00:26:29.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "$NVMF_PORT", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.218 "hdgst": ${hdgst:-false}, 00:26:29.218 "ddgst": ${ddgst:-false} 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 } 00:26:29.218 EOF 00:26:29.218 )") 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
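The trace above captures the core of gen_nvmf_target_json: for every requested subsystem number it appends one bdev_nvme_attach_controller parameter block to a config array via a here-document, then joins the blocks with IFS=, and pipes the result through jq before handing it to bdevperf. What follows is a minimal standalone sketch of that pattern, not the exact contents of nvmf/common.sh; the function name gen_target_json_sketch, the outer bdev-subsystem wrapper, and the hard-coded address/port are illustrative assumptions.

#!/usr/bin/env bash
# Sketch of the per-subsystem JSON generation seen in the trace above.
gen_target_json_sketch() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller block per subsystem; the cnodeN/hostN names
        # follow the same scheme as the generated config in the log.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the blocks with commas and validate/pretty-print with jq,
    # mirroring the IFS=, / printf / jq steps visible in the trace.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

# Ten controllers, matching the cnode1..cnode10 layout used by this test.
gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10

The printed document has the same general shape as the config bdevperf reads over /dev/fd/62 in the command shown earlier in the trace.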
00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:29.218 10:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme1", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme2", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme3", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme4", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme5", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme6", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme7", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme8", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme9", 00:26:29.218 "trtype": "tcp", 00:26:29.218 "traddr": "10.0.0.2", 00:26:29.218 "adrfam": "ipv4", 00:26:29.218 "trsvcid": "4420", 00:26:29.218 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:29.218 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:29.218 "hdgst": false, 00:26:29.218 "ddgst": false 00:26:29.218 }, 00:26:29.218 "method": "bdev_nvme_attach_controller" 00:26:29.218 },{ 00:26:29.218 "params": { 00:26:29.218 "name": "Nvme10", 00:26:29.219 "trtype": "tcp", 00:26:29.219 "traddr": "10.0.0.2", 00:26:29.219 "adrfam": "ipv4", 00:26:29.219 "trsvcid": "4420", 00:26:29.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:29.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:29.219 "hdgst": false, 00:26:29.219 "ddgst": false 00:26:29.219 }, 00:26:29.219 "method": "bdev_nvme_attach_controller" 00:26:29.219 }' 00:26:29.219 [2024-11-06 10:18:32.492556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.219 [2024-11-06 10:18:32.528495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.603 Running I/O for 1 seconds... 00:26:31.805 1806.00 IOPS, 112.88 MiB/s 00:26:31.805 Latency(us) 00:26:31.805 [2024-11-06T09:18:35.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.805 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme1n1 : 1.14 224.13 14.01 0.00 0.00 281930.45 18350.08 249910.61 00:26:31.805 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme2n1 : 1.13 230.07 14.38 0.00 0.00 269451.46 5106.35 248162.99 00:26:31.805 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme3n1 : 1.13 226.65 14.17 0.00 0.00 269378.77 18131.63 246415.36 00:26:31.805 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme4n1 : 1.10 234.93 14.68 0.00 0.00 253282.72 7864.32 256901.12 00:26:31.805 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme5n1 : 1.17 217.95 13.62 0.00 0.00 270937.39 14854.83 276125.01 00:26:31.805 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme6n1 : 1.14 224.61 14.04 0.00 0.00 257811.20 18786.99 269134.51 00:26:31.805 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme7n1 : 1.18 271.68 16.98 0.00 0.00 209406.46 15073.28 265639.25 00:26:31.805 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme8n1 : 1.15 223.37 13.96 0.00 0.00 249831.68 16602.45 256901.12 00:26:31.805 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme9n1 : 1.18 270.51 16.91 0.00 0.00 203337.05 16165.55 246415.36 00:26:31.805 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:26:31.805 Verification LBA range: start 0x0 length 0x400 00:26:31.805 Nvme10n1 : 1.19 268.39 16.77 0.00 0.00 201337.09 9393.49 255153.49 00:26:31.805 [2024-11-06T09:18:35.306Z] =================================================================================================================== 00:26:31.805 [2024-11-06T09:18:35.306Z] Total : 2392.29 149.52 0.00 0.00 243798.75 5106.35 276125.01 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.805 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.066 rmmod nvme_tcp 00:26:32.066 rmmod nvme_fabrics 00:26:32.066 rmmod nvme_keyring 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3962572 ']' 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3962572 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3962572 ']' 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3962572 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3962572 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3962572' 00:26:32.066 killing process with pid 3962572 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3962572 00:26:32.066 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3962572 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.327 10:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.238 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.238 00:26:34.238 real 0m17.616s 00:26:34.238 user 0m33.908s 00:26:34.238 sys 0m7.427s 00:26:34.238 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:34.238 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:34.238 ************************************ 00:26:34.238 END TEST nvmf_shutdown_tc1 00:26:34.238 ************************************ 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:34.500 ************************************ 00:26:34.500 START TEST nvmf_shutdown_tc2 00:26:34.500 ************************************ 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:26:34.500 10:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:34.500 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:34.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:34.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:34.501 Found net devices under 0000:31:00.0: cvl_0_0 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.501 10:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:34.501 Found net devices under 0000:31:00.1: cvl_0_1 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.501 10:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:34.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:26:34.762 00:26:34.762 --- 10.0.0.2 ping statistics --- 00:26:34.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.762 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:26:34.762 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:26:34.763 00:26:34.763 --- 10.0.0.1 ping statistics --- 00:26:34.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.763 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.763 10:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3964634 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3964634 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3964634 ']' 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:34.763 10:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.024 [2024-11-06 10:18:38.290555] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:35.024 [2024-11-06 10:18:38.290621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.024 [2024-11-06 10:18:38.390625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.024 [2024-11-06 10:18:38.429521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.024 [2024-11-06 10:18:38.429556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.024 [2024-11-06 10:18:38.429562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.024 [2024-11-06 10:18:38.429567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.024 [2024-11-06 10:18:38.429571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
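At this point the tc2 variant has finished its network bring-up and is starting its own target. Condensed, the nvmf_tcp_init steps traced above amount to the commands below, using the same interface names and addresses as this run: the first E810 port (cvl_0_0) is moved into a dedicated namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator side.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP from the initiator side
ping -c 1 10.0.0.2                                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> initiator

With both pings succeeding, nvmf_tgt is launched inside the namespace with -i 0 -e 0xFFFF -m 0x1E, and the script waits for it to start listening on /var/tmp/spdk.sock before creating the TCP transport and the ten subsystems.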
00:26:35.024 [2024-11-06 10:18:38.431030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.024 [2024-11-06 10:18:38.431272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.024 [2024-11-06 10:18:38.431400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.024 [2024-11-06 10:18:38.431401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.965 [2024-11-06 10:18:39.151397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.965 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.965 Malloc1 00:26:35.965 [2024-11-06 10:18:39.260641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.965 Malloc2 00:26:35.965 Malloc3 00:26:35.965 Malloc4 00:26:35.965 Malloc5 00:26:35.965 Malloc6 00:26:36.229 Malloc7 00:26:36.229 Malloc8 00:26:36.229 Malloc9 00:26:36.229 Malloc10 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3964862 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3964862 /var/tmp/bdevperf.sock 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3964862 ']' 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.229 10:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 "name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.229 EOF 00:26:36.229 )") 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 "name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.229 EOF 00:26:36.229 )") 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 
"name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.229 EOF 00:26:36.229 )") 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 "name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.229 EOF 00:26:36.229 )") 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 "name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.229 EOF 00:26:36.229 )") 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.229 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.229 { 00:26:36.229 "params": { 00:26:36.229 "name": "Nvme$subsystem", 00:26:36.229 "trtype": "$TEST_TRANSPORT", 00:26:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.229 "adrfam": "ipv4", 00:26:36.229 "trsvcid": "$NVMF_PORT", 00:26:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.229 "hdgst": ${hdgst:-false}, 00:26:36.229 "ddgst": ${ddgst:-false} 00:26:36.229 }, 00:26:36.229 "method": "bdev_nvme_attach_controller" 00:26:36.229 } 00:26:36.230 EOF 00:26:36.230 )") 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.230 { 00:26:36.230 "params": { 00:26:36.230 "name": "Nvme$subsystem", 00:26:36.230 "trtype": "$TEST_TRANSPORT", 00:26:36.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.230 "adrfam": "ipv4", 00:26:36.230 "trsvcid": "$NVMF_PORT", 00:26:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.230 "hdgst": ${hdgst:-false}, 00:26:36.230 "ddgst": ${ddgst:-false} 00:26:36.230 }, 00:26:36.230 "method": "bdev_nvme_attach_controller" 00:26:36.230 } 00:26:36.230 EOF 00:26:36.230 )") 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.230 { 00:26:36.230 "params": { 00:26:36.230 "name": "Nvme$subsystem", 00:26:36.230 "trtype": "$TEST_TRANSPORT", 00:26:36.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.230 "adrfam": "ipv4", 00:26:36.230 "trsvcid": "$NVMF_PORT", 00:26:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.230 "hdgst": ${hdgst:-false}, 00:26:36.230 "ddgst": ${ddgst:-false} 00:26:36.230 }, 00:26:36.230 "method": "bdev_nvme_attach_controller" 00:26:36.230 } 00:26:36.230 EOF 00:26:36.230 )") 00:26:36.230 [2024-11-06 10:18:39.716524] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:36.230 [2024-11-06 10:18:39.716583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964862 ] 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.230 { 00:26:36.230 "params": { 00:26:36.230 "name": "Nvme$subsystem", 00:26:36.230 "trtype": "$TEST_TRANSPORT", 00:26:36.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.230 "adrfam": "ipv4", 00:26:36.230 "trsvcid": "$NVMF_PORT", 00:26:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.230 "hdgst": ${hdgst:-false}, 00:26:36.230 "ddgst": ${ddgst:-false} 00:26:36.230 }, 00:26:36.230 "method": "bdev_nvme_attach_controller" 00:26:36.230 } 00:26:36.230 EOF 00:26:36.230 )") 00:26:36.230 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.497 { 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme$subsystem", 00:26:36.497 "trtype": "$TEST_TRANSPORT", 00:26:36.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.497 
"adrfam": "ipv4", 00:26:36.497 "trsvcid": "$NVMF_PORT", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.497 "hdgst": ${hdgst:-false}, 00:26:36.497 "ddgst": ${ddgst:-false} 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 } 00:26:36.497 EOF 00:26:36.497 )") 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:36.497 10:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme1", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme2", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme3", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme4", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme5", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme6", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme7", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 
00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme8", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme9", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 },{ 00:26:36.497 "params": { 00:26:36.497 "name": "Nvme10", 00:26:36.497 "trtype": "tcp", 00:26:36.497 "traddr": "10.0.0.2", 00:26:36.497 "adrfam": "ipv4", 00:26:36.497 "trsvcid": "4420", 00:26:36.497 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.497 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.497 "hdgst": false, 00:26:36.497 "ddgst": false 00:26:36.497 }, 00:26:36.497 "method": "bdev_nvme_attach_controller" 00:26:36.497 }' 00:26:36.498 [2024-11-06 10:18:39.796834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.498 [2024-11-06 10:18:39.833035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.881 Running I/O for 10 seconds... 
00:26:37.881 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:37.881 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:37.881 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:37.881 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.881 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:38.142 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.403 10:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:38.403 10:18:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=146 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 146 -ge 100 ']' 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3964862 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3964862 ']' 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3964862 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:38.663 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3964862 00:26:38.923 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:38.923 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:38.923 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3964862' 00:26:38.923 killing process with pid 3964862 00:26:38.923 10:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3964862 00:26:38.923 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3964862
00:26:38.923 Received shutdown signal, test time was about 0.980131 seconds
00:26:38.923
00:26:38.923 Latency(us)
00:26:38.923 [2024-11-06T09:18:42.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.923 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme1n1 : 0.97 268.53 16.78 0.00 0.00 234788.81 3522.56 263891.63
00:26:38.923 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme2n1 : 0.97 265.09 16.57 0.00 0.00 233764.69 17476.27 228939.09
00:26:38.923 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme3n1 : 0.98 256.32 16.02 0.00 0.00 236207.89 14636.37 237677.23
00:26:38.923 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme4n1 : 0.97 262.93 16.43 0.00 0.00 226297.39 20534.61 267386.88
00:26:38.923 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme5n1 : 0.94 203.97 12.75 0.00 0.00 284745.67 16930.13 249910.61
00:26:38.923 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme6n1 : 0.95 202.00 12.63 0.00 0.00 281609.96 15073.28 260396.37
00:26:38.923 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme7n1 : 0.97 263.49 16.47 0.00 0.00 211684.69 12014.93 251658.24
00:26:38.923 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme8n1 : 0.96 266.40 16.65 0.00 0.00 204328.32 14745.60 255153.49
00:26:38.923 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme9n1 : 0.96 200.51 12.53 0.00 0.00 264870.68 17039.36 269134.51
00:26:38.923 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.923 Verification LBA range: start 0x0 length 0x400
00:26:38.923 Nvme10n1 : 0.95 201.05 12.57 0.00 0.00 258110.58 22391.47 256901.12
00:26:38.923 [2024-11-06T09:18:42.424Z] ===================================================================================================================
00:26:38.923 [2024-11-06T09:18:42.424Z] Total : 2390.30 149.39 0.00 0.00 240452.10 3522.56 269134.51
00:26:38.923 10:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:40.306 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:40.307 10:18:43
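
The kill above is only issued after the waitforio gate in target/shutdown.sh has confirmed real traffic: it repeatedly queries bdevperf's RPC socket for Nvme1n1 read counts (3, then 67, then 146 in this run) and passes once at least 100 reads have completed. A minimal sketch of that polling loop, assuming rpc.py and jq are on PATH and using the same socket path as above; the function name is illustrative:

    # Sketch only: poll num_read_ops for one bdev until it reaches 100 or the
    # retry budget (10 attempts, 0.25 s apart) runs out.
    waitforio_sketch() {
        local sock=$1 bdev=$2 i=10 reads
        while (( i != 0 )); do
            reads=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [ "${reads:-0}" -ge 100 ] && return 0
            sleep 0.25
            (( i-- ))
        done
        return 1
    }
    # waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1
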
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.307 rmmod nvme_tcp 00:26:40.307 rmmod nvme_fabrics 00:26:40.307 rmmod nvme_keyring 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3964634 ']' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3964634 ']' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3964634' 00:26:40.307 killing process with pid 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3964634 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:40.307 10:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.307 10:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.852 00:26:42.852 real 0m8.036s 00:26:42.852 user 0m24.336s 00:26:42.852 sys 0m1.270s 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.852 ************************************ 00:26:42.852 END TEST nvmf_shutdown_tc2 00:26:42.852 ************************************ 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:42.852 ************************************ 00:26:42.852 START TEST nvmf_shutdown_tc3 00:26:42.852 ************************************ 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:42.852 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:42.852 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.852 10:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:42.852 Found net devices under 0000:31:00.0: cvl_0_0 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.852 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:42.853 Found net devices under 0000:31:00.1: cvl_0_1 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.853 10:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.853 10:18:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:26:42.853 00:26:42.853 --- 10.0.0.2 ping statistics --- 00:26:42.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.853 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:26:42.853 00:26:42.853 --- 10.0.0.1 ping statistics --- 00:26:42.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.853 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3966289 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3966289 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:42.853 10:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3966289 ']' 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.853 10:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 [2024-11-06 10:18:46.365887] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:43.114 [2024-11-06 10:18:46.365940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.114 [2024-11-06 10:18:46.465918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.114 [2024-11-06 10:18:46.504661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.114 [2024-11-06 10:18:46.504696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.114 [2024-11-06 10:18:46.504702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.114 [2024-11-06 10:18:46.504707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.114 [2024-11-06 10:18:46.504712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
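
With the network namespace prepared, nvmf_tgt is launched inside cvl_0_0_ns_spdk on cores 1-4 (-m 0x1E) and the test blocks until the target's RPC socket responds. A hedged sketch of that start-and-wait step; the real waitforlisten in autotest_common.sh does more bookkeeping, and probing with rpc_get_methods is only one reasonable readiness check:

    # Sketch only: start the target in the netns, then poll its RPC socket.
    start_tgt_sketch() {
        ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
        local pid=$!
        for _ in $(seq 1 100); do
            if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
                echo "nvmf_tgt (pid $pid) is listening on /var/tmp/spdk.sock"
                return 0
            fi
            sleep 0.1
        done
        echo "nvmf_tgt (pid $pid) never started listening" >&2
        return 1
    }
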
00:26:43.114 [2024-11-06 10:18:46.506205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.114 [2024-11-06 10:18:46.506364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.114 [2024-11-06 10:18:46.506518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.114 [2024-11-06 10:18:46.506520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:43.685 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.685 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:43.685 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.686 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.686 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 [2024-11-06 10:18:47.210795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.946 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 Malloc1 00:26:43.946 [2024-11-06 10:18:47.324714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.946 Malloc2 00:26:43.946 Malloc3 00:26:43.946 Malloc4 00:26:44.207 Malloc5 00:26:44.207 Malloc6 00:26:44.207 Malloc7 00:26:44.207 Malloc8 00:26:44.207 Malloc9 00:26:44.207 Malloc10 00:26:44.207 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.208 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:44.208 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.208 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3966672 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3966672 /var/tmp/bdevperf.sock 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3966672 ']' 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:44.469 10:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:44.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.469 "params": { 00:26:44.469 "name": "Nvme$subsystem", 00:26:44.469 "trtype": "$TEST_TRANSPORT", 00:26:44.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.469 "adrfam": "ipv4", 00:26:44.469 "trsvcid": "$NVMF_PORT", 00:26:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.469 "hdgst": ${hdgst:-false}, 00:26:44.469 "ddgst": ${ddgst:-false} 00:26:44.469 }, 00:26:44.469 "method": "bdev_nvme_attach_controller" 00:26:44.469 } 00:26:44.469 EOF 00:26:44.469 )") 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.469 "params": { 00:26:44.469 "name": "Nvme$subsystem", 00:26:44.469 "trtype": "$TEST_TRANSPORT", 00:26:44.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.469 "adrfam": "ipv4", 00:26:44.469 "trsvcid": "$NVMF_PORT", 00:26:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.469 "hdgst": ${hdgst:-false}, 00:26:44.469 "ddgst": ${ddgst:-false} 00:26:44.469 }, 00:26:44.469 "method": "bdev_nvme_attach_controller" 00:26:44.469 } 00:26:44.469 EOF 00:26:44.469 )") 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.469 "params": { 00:26:44.469 
"name": "Nvme$subsystem", 00:26:44.469 "trtype": "$TEST_TRANSPORT", 00:26:44.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.469 "adrfam": "ipv4", 00:26:44.469 "trsvcid": "$NVMF_PORT", 00:26:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.469 "hdgst": ${hdgst:-false}, 00:26:44.469 "ddgst": ${ddgst:-false} 00:26:44.469 }, 00:26:44.469 "method": "bdev_nvme_attach_controller" 00:26:44.469 } 00:26:44.469 EOF 00:26:44.469 )") 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.469 "params": { 00:26:44.469 "name": "Nvme$subsystem", 00:26:44.469 "trtype": "$TEST_TRANSPORT", 00:26:44.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.469 "adrfam": "ipv4", 00:26:44.469 "trsvcid": "$NVMF_PORT", 00:26:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.469 "hdgst": ${hdgst:-false}, 00:26:44.469 "ddgst": ${ddgst:-false} 00:26:44.469 }, 00:26:44.469 "method": "bdev_nvme_attach_controller" 00:26:44.469 } 00:26:44.469 EOF 00:26:44.469 )") 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.469 "params": { 00:26:44.469 "name": "Nvme$subsystem", 00:26:44.469 "trtype": "$TEST_TRANSPORT", 00:26:44.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.469 "adrfam": "ipv4", 00:26:44.469 "trsvcid": "$NVMF_PORT", 00:26:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.469 "hdgst": ${hdgst:-false}, 00:26:44.469 "ddgst": ${ddgst:-false} 00:26:44.469 }, 00:26:44.469 "method": "bdev_nvme_attach_controller" 00:26:44.469 } 00:26:44.469 EOF 00:26:44.469 )") 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.469 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.469 { 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme$subsystem", 00:26:44.470 "trtype": "$TEST_TRANSPORT", 00:26:44.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "$NVMF_PORT", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.470 "hdgst": ${hdgst:-false}, 00:26:44.470 "ddgst": ${ddgst:-false} 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 } 00:26:44.470 EOF 00:26:44.470 )") 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.470 [2024-11-06 10:18:47.776594] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:26:44.470 [2024-11-06 10:18:47.776647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966672 ] 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.470 { 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme$subsystem", 00:26:44.470 "trtype": "$TEST_TRANSPORT", 00:26:44.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "$NVMF_PORT", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.470 "hdgst": ${hdgst:-false}, 00:26:44.470 "ddgst": ${ddgst:-false} 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 } 00:26:44.470 EOF 00:26:44.470 )") 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.470 { 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme$subsystem", 00:26:44.470 "trtype": "$TEST_TRANSPORT", 00:26:44.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "$NVMF_PORT", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.470 "hdgst": ${hdgst:-false}, 00:26:44.470 "ddgst": ${ddgst:-false} 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 } 00:26:44.470 EOF 00:26:44.470 )") 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.470 { 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme$subsystem", 00:26:44.470 "trtype": "$TEST_TRANSPORT", 00:26:44.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "$NVMF_PORT", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.470 "hdgst": ${hdgst:-false}, 00:26:44.470 "ddgst": ${ddgst:-false} 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 } 00:26:44.470 EOF 00:26:44.470 )") 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.470 { 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme$subsystem", 00:26:44.470 "trtype": "$TEST_TRANSPORT", 00:26:44.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.470 
"adrfam": "ipv4", 00:26:44.470 "trsvcid": "$NVMF_PORT", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.470 "hdgst": ${hdgst:-false}, 00:26:44.470 "ddgst": ${ddgst:-false} 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 } 00:26:44.470 EOF 00:26:44.470 )") 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:44.470 10:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme1", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme2", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme3", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme4", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme5", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme6", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme7", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 
00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme8", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme9", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 },{ 00:26:44.470 "params": { 00:26:44.470 "name": "Nvme10", 00:26:44.470 "trtype": "tcp", 00:26:44.470 "traddr": "10.0.0.2", 00:26:44.470 "adrfam": "ipv4", 00:26:44.470 "trsvcid": "4420", 00:26:44.470 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:44.470 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:44.470 "hdgst": false, 00:26:44.470 "ddgst": false 00:26:44.470 }, 00:26:44.470 "method": "bdev_nvme_attach_controller" 00:26:44.470 }' 00:26:44.470 [2024-11-06 10:18:47.855612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.470 [2024-11-06 10:18:47.892361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.384 Running I/O for 10 seconds... 
00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:46.955 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3966289 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3966289 ']' 
00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3966289 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:46.956 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3966289 00:26:47.233 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:47.233 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:47.233 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3966289' 00:26:47.233 killing process with pid 3966289 00:26:47.233 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3966289 00:26:47.233 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3966289 00:26:47.233 [2024-11-06 10:18:50.465891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb0a0 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbd20 is same with the state(6) to be set 00:26:47.233 [2024-11-06 10:18:50.467108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
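Before killing the target application (pid 3966289), the trace above shows shutdown.sh's waitforio gate: it reads num_read_ops for Nvme1n1 over the bdevperf RPC socket and only proceeds once at least 100 reads have completed (131 here). A minimal sketch of that polling loop, assuming scripts/rpc.py is available and using a hypothetical $nvmf_target_pid variable in place of the literal pid:

rpc=./spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for _ in $(seq 1 10); do
	# same probe as the trace: read counter for the first attached controller
	reads=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
	[ "${reads:-0}" -ge 100 ] && break   # enough verified reads observed, safe to shut down
	sleep 1
done
kill "$nvmf_target_pid"   # hypothetical variable; this run kills pid 3966289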
[2024-11-06 10:18:50.467113 .. 10:18:50.472956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(6) to be set 00:26:47.233-00:26:47.237 (same message repeated many times during target shutdown for tqpair 0x1ccbd20, 0x1ceb570, 0x1ceba40, 0x1cebf10, 0x1cec3e0 and 0x1a7b2d0; trailing entries truncated)
10:18:50.472969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.472974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.472979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.472985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.472990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.472995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same 
with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473182] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b2d0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.473946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b7a0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.474440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.237 [2024-11-06 10:18:50.474454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the 
state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set 00:26:47.238 [2024-11-06 10:18:50.474563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.474568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.474573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.474577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.474582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.474586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.484919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.484918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.484938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.484945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.484952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.484962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.484969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.484975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372b40 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.484996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.485033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.485047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.485067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.485086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137b600 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.238 [2024-11-06 10:18:50.485131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc35e0 is same with the state(6) to be set
00:26:47.238 [2024-11-06 10:18:50.485135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.238 [2024-11-06 10:18:50.485145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.239 [2024-11-06 10:18:50.485154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.239 [2024-11-06 10:18:50.485161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.239 [2024-11-06 10:18:50.485169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1328370 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363a10 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xe1a610 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe080 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeb00 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 
[2024-11-06 10:18:50.485601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec960 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.485671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.239 [2024-11-06 10:18:50.485730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.485737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefc840 is same with the state(6) to be set 00:26:47.239 [2024-11-06 10:18:50.486183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.239 [2024-11-06 10:18:50.486367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.239 [2024-11-06 10:18:50.486377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.486988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.486995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.487005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.487012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.487022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.487029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.487039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.487046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.487057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.240 [2024-11-06 10:18:50.487064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.240 [2024-11-06 10:18:50.487074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.241 [2024-11-06 10:18:50.487288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.487305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.487332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.241 [2024-11-06 10:18:50.488436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:47.241 [2024-11-06 10:18:50.488592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 
[2024-11-06 10:18:50.488768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.241 [2024-11-06 10:18:50.488893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.241 [2024-11-06 10:18:50.488901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.488935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488945] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.488953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.488970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.488986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.488996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.489160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.489170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.496988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.496998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.242 [2024-11-06 10:18:50.497007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.242 [2024-11-06 10:18:50.497015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.497033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.497050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.497068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1372b40 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137b600 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1328370 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363a10 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1a610 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.243 [2024-11-06 10:18:50.497428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.243 [2024-11-06 10:18:50.497446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:47.243 [2024-11-06 10:18:50.497455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.243 [2024-11-06 10:18:50.497463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.243 [2024-11-06 10:18:50.497480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.497487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372510 is same with the state(6) to be set 00:26:47.243 [2024-11-06 10:18:50.497506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe080 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeeb00 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec960 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.497558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefc840 (9): Bad file descriptor 00:26:47.243 [2024-11-06 10:18:50.499026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.243 [2024-11-06 10:18:50.499505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.243 [2024-11-06 10:18:50.499513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.499986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.499994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.500186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.244 [2024-11-06 10:18:50.500194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.244 [2024-11-06 10:18:50.501552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:47.244 [2024-11-06 10:18:50.503145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:26:47.244 [2024-11-06 10:18:50.503538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.244 [2024-11-06 10:18:50.503560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeeeb00 with addr=10.0.0.2, port=4420 00:26:47.245 [2024-11-06 10:18:50.503570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeb00 is same 
with the state(6) to be set 00:26:47.245 [2024-11-06 10:18:50.504404] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.504430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:47.245 [2024-11-06 10:18:50.504904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.245 [2024-11-06 10:18:50.504930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1372b40 with addr=10.0.0.2, port=4420 00:26:47.245 [2024-11-06 10:18:50.504940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372b40 is same with the state(6) to be set 00:26:47.245 [2024-11-06 10:18:50.504954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeeb00 (9): Bad file descriptor 00:26:47.245 [2024-11-06 10:18:50.505011] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505051] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505089] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505127] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505168] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505476] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.245 [2024-11-06 10:18:50.505684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.245 [2024-11-06 10:18:50.505700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1a610 with addr=10.0.0.2, port=4420 00:26:47.245 [2024-11-06 10:18:50.505709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1a610 is same with the state(6) to be set 00:26:47.245 [2024-11-06 10:18:50.505720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1372b40 (9): Bad file descriptor 00:26:47.245 [2024-11-06 10:18:50.505731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:47.245 [2024-11-06 10:18:50.505738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:47.245 [2024-11-06 10:18:50.505750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:47.245 [2024-11-06 10:18:50.505760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:26:47.245 [2024-11-06 10:18:50.505803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.505988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.505996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 
10:18:50.506007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.245 [2024-11-06 10:18:50.506384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.245 [2024-11-06 10:18:50.506393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.506982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.246 [2024-11-06 10:18:50.506991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.246 [2024-11-06 10:18:50.507000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9da0 is same with the state(6) to be set 00:26:47.246 [2024-11-06 10:18:50.507119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1a610 (9): Bad file descriptor 00:26:47.246 [2024-11-06 10:18:50.507133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:47.246 [2024-11-06 10:18:50.507142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:47.246 [2024-11-06 10:18:50.507150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:47.246 [2024-11-06 10:18:50.507158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:47.247 [2024-11-06 10:18:50.508419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:47.247 [2024-11-06 10:18:50.508444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:47.247 [2024-11-06 10:18:50.508453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:47.247 [2024-11-06 10:18:50.508462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:47.247 [2024-11-06 10:18:50.508470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:26:47.247 [2024-11-06 10:18:50.508502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1372510 (9): Bad file descriptor 00:26:47.247 [2024-11-06 10:18:50.508786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.247 [2024-11-06 10:18:50.508803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefc840 with addr=10.0.0.2, port=4420 00:26:47.247 [2024-11-06 10:18:50.508810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefc840 is same with the state(6) to be set 00:26:47.247 [2024-11-06 10:18:50.508842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.508988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.508998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.247 [2024-11-06 10:18:50.509492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.247 [2024-11-06 10:18:50.509500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.248 [2024-11-06 10:18:50.509554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 
10:18:50.509736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.509984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.509992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.510001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.510009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.510017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a0e0 is same with the state(6) to be set 00:26:47.248 [2024-11-06 10:18:50.511554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.248 [2024-11-06 10:18:50.511739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.248 [2024-11-06 10:18:50.511746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.511983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.511993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.249 [2024-11-06 10:18:50.512461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.249 [2024-11-06 10:18:50.512469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.512705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.512714] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ab150 is same with the state(6) to be set 00:26:47.250 [2024-11-06 10:18:50.513981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.513995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.250 [2024-11-06 10:18:50.514467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.250 [2024-11-06 10:18:50.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.251 [2024-11-06 10:18:50.514914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.514989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.514997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.515007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.515026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.515044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.515062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 10:18:50.515080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.251 [2024-11-06 
10:18:50.515097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.251 [2024-11-06 10:18:50.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.515115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.515124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.515134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.515151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.515159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.515168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13056b0 is same with the state(6) to be set 00:26:47.252 [2024-11-06 10:18:50.516444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.516987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.516995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.252 [2024-11-06 10:18:50.517122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.252 [2024-11-06 10:18:50.517132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.517643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.517652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1306c20 is same with the state(6) to be set 00:26:47.253 [2024-11-06 10:18:50.518945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.518963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.518975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.518984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.518995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.253 [2024-11-06 10:18:50.519138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.253 [2024-11-06 10:18:50.519151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.254 [2024-11-06 10:18:50.519700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-11-06 10:18:50.519868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.254 [2024-11-06 10:18:50.519878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 
10:18:50.519886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.519985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.519993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.520102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.520111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130c1b0 is same with the state(6) to be set 00:26:47.255 [2024-11-06 10:18:50.521369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:47.255 [2024-11-06 10:18:50.521387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:47.255 [2024-11-06 10:18:50.521400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:47.255 [2024-11-06 10:18:50.521412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:47.255 [2024-11-06 10:18:50.521456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefc840 (9): Bad file descriptor 00:26:47.255 [2024-11-06 10:18:50.521518] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:47.255 [2024-11-06 10:18:50.521544] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:26:47.255 [2024-11-06 10:18:50.521616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:47.255 [2024-11-06 10:18:50.521889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.255 [2024-11-06 10:18:50.521906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefe080 with addr=10.0.0.2, port=4420 00:26:47.255 [2024-11-06 10:18:50.521914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe080 is same with the state(6) to be set 00:26:47.255 [2024-11-06 10:18:50.522252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.255 [2024-11-06 10:18:50.522265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeec960 with addr=10.0.0.2, port=4420 00:26:47.255 [2024-11-06 10:18:50.522273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec960 is same with the state(6) to be set 00:26:47.255 [2024-11-06 10:18:50.522492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.255 [2024-11-06 10:18:50.522506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1328370 with addr=10.0.0.2, port=4420 00:26:47.255 [2024-11-06 10:18:50.522514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1328370 is same with the state(6) to be set 00:26:47.255 [2024-11-06 10:18:50.522841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.255 [2024-11-06 10:18:50.522853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1363a10 with addr=10.0.0.2, port=4420 00:26:47.255 [2024-11-06 10:18:50.522860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363a10 is same with the state(6) to be set 00:26:47.255 [2024-11-06 10:18:50.522873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:47.255 [2024-11-06 10:18:50.522884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:47.255 [2024-11-06 10:18:50.522892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:47.255 [2024-11-06 10:18:50.522900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:26:47.255 [2024-11-06 10:18:50.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 
10:18:50.524169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-11-06 10:18:50.524178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.255 [2024-11-06 10:18:50.524187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524353] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.256 [2024-11-06 10:18:50.524920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.256 [2024-11-06 10:18:50.524930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.524937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.524947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.524955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.524965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.524973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.524983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.524991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.525012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.525029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.525065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.257 [2024-11-06 10:18:50.525084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.257 [2024-11-06 10:18:50.525094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.257 [2024-11-06 10:18:50.525102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.257 [2024-11-06 10:18:50.525112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.257 [2024-11-06 10:18:50.525122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.257 [2024-11-06 10:18:50.525132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.257 [2024-11-06 10:18:50.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.257 [2024-11-06 10:18:50.525151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.257 [2024-11-06 10:18:50.525159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.257 [2024-11-06 10:18:50.525168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130ac80 is same with the state(6) to be set
00:26:47.257 [2024-11-06 10:18:50.526951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:47.257 [2024-11-06 10:18:50.526975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:47.257 [2024-11-06 10:18:50.526985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:47.257 task offset: 26624 on job bdev=Nvme1n1 fails
00:26:47.257
00:26:47.257 Latency(us)
00:26:47.257 [2024-11-06T09:18:50.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.257 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme1n1 ended in about 0.93 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme1n1 : 0.93 207.15 12.95 69.05 0.00 229009.28 11741.87 249910.61
00:26:47.257 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme2n1 ended in about 0.94 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme2n1 : 0.94 136.27 8.52 68.14 0.00 303047.40 19333.12 256901.12
00:26:47.257 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme3n1 ended in about 0.94 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme3n1 : 0.94 205.03 12.81 68.34 0.00 221739.52 5734.40 228939.09
00:26:47.257 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme4n1 ended in about 0.94 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme4n1 : 0.94 203.83 12.74 67.94 0.00 218176.64 11796.48 263891.63
00:26:47.257 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme5n1 ended in about 0.94 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme5n1 : 0.94 135.53 8.47 67.77 0.00 285488.92 19879.25 246415.36
00:26:47.257 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme6n1 ended in about 0.95 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme6n1 : 0.95 135.18 8.45 67.59 0.00 279833.32 19988.48 260396.37
00:26:47.257 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme7n1 ended in about 0.93 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme7n1 : 0.93 206.21 12.89 68.74 0.00 201017.28 13707.95 248162.99
00:26:47.257 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme8n1 ended in about 0.93 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme8n1 : 0.93 206.56 12.91 68.85 0.00 195768.32 15728.64 253405.87
00:26:47.257 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme9n1 ended in about 0.95 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme9n1 : 0.95 134.12 8.38 67.06 0.00 262956.37 15073.28 267386.88
00:26:47.257 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.257 Job: Nvme10n1 ended in about 0.95 seconds with error
00:26:47.257 Verification LBA range: start 0x0 length 0x400
00:26:47.257 Nvme10n1 : 0.95 134.83 8.43 67.41 0.00 254762.67 28835.84 263891.63
00:26:47.257 [2024-11-06T09:18:50.758Z] ===================================================================================================================
00:26:47.257 [2024-11-06T09:18:50.758Z] Total : 1704.71 106.54 680.89 0.00 240603.15 5734.40 267386.88
00:26:47.257 [2024-11-06 10:18:50.551544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:47.257 [2024-11-06 10:18:50.551575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:47.257 [2024-11-06 10:18:50.551900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.257 [2024-11-06 10:18:50.551918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137b600 with addr=10.0.0.2, port=4420
00:26:47.257 [2024-11-06 10:18:50.551928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137b600 is same with the state(6) to be set
00:26:47.257 [2024-11-06 10:18:50.551942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe080 (9): Bad file descriptor
00:26:47.257 [2024-11-06 10:18:50.551954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec960 (9): Bad file descriptor
00:26:47.257 [2024-11-06 10:18:50.551964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1328370 (9): Bad file descriptor
00:26:47.257 [2024-11-06 10:18:50.551974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363a10 (9): Bad file descriptor
00:26:47.257 [2024-11-06 10:18:50.552418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.257 [2024-11-06 10:18:50.552434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeeeb00 with addr=10.0.0.2, port=4420
00:26:47.257 [2024-11-06 10:18:50.552441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xeeeb00 is same with the state(6) to be set 00:26:47.257 [2024-11-06 10:18:50.552777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.257 [2024-11-06 10:18:50.552788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1372b40 with addr=10.0.0.2, port=4420 00:26:47.257 [2024-11-06 10:18:50.552795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372b40 is same with the state(6) to be set 00:26:47.257 [2024-11-06 10:18:50.553118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.257 [2024-11-06 10:18:50.553130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1a610 with addr=10.0.0.2, port=4420 00:26:47.257 [2024-11-06 10:18:50.553137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1a610 is same with the state(6) to be set 00:26:47.257 [2024-11-06 10:18:50.553330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.257 [2024-11-06 10:18:50.553342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1372510 with addr=10.0.0.2, port=4420 00:26:47.257 [2024-11-06 10:18:50.553349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1372510 is same with the state(6) to be set 00:26:47.257 [2024-11-06 10:18:50.553358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137b600 (9): Bad file descriptor 00:26:47.257 [2024-11-06 10:18:50.553368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:47.257 [2024-11-06 10:18:50.553376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:47.257 [2024-11-06 10:18:50.553385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:47.257 [2024-11-06 10:18:50.553397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:47.257 [2024-11-06 10:18:50.553406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:47.257 [2024-11-06 10:18:50.553413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:47.257 [2024-11-06 10:18:50.553420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.553427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.553435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.553441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.553449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.553456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:26:47.258 [2024-11-06 10:18:50.553464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.553471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.553478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.553485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.553536] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:47.258 [2024-11-06 10:18:50.553897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeeeb00 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.553913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1372b40 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.553923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1a610 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.553933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1372510 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.553942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.553949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.553956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.553962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.554000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:47.258 [2024-11-06 10:18:50.554012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:47.258 [2024-11-06 10:18:50.554021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:47.258 [2024-11-06 10:18:50.554031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:47.258 [2024-11-06 10:18:50.554041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:47.258 [2024-11-06 10:18:50.554079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.554087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.554099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.554106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:26:47.258 [2024-11-06 10:18:50.554114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.554120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.554128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.554135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.554142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.554149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.554156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.554163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.554170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.554177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.554184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.554190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:26:47.258 [2024-11-06 10:18:50.554531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-11-06 10:18:50.554546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefc840 with addr=10.0.0.2, port=4420 00:26:47.258 [2024-11-06 10:18:50.554554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefc840 is same with the state(6) to be set 00:26:47.258 [2024-11-06 10:18:50.554742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-11-06 10:18:50.554754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1363a10 with addr=10.0.0.2, port=4420 00:26:47.258 [2024-11-06 10:18:50.554761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363a10 is same with the state(6) to be set 00:26:47.258 [2024-11-06 10:18:50.555085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-11-06 10:18:50.555097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1328370 with addr=10.0.0.2, port=4420 00:26:47.258 [2024-11-06 10:18:50.555104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1328370 is same with the state(6) to be set 00:26:47.258 [2024-11-06 10:18:50.555439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-11-06 10:18:50.555450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeec960 with addr=10.0.0.2, port=4420 00:26:47.258 [2024-11-06 10:18:50.555457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec960 is same with the state(6) to be set 00:26:47.258 [2024-11-06 10:18:50.555808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.258 [2024-11-06 10:18:50.555819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefe080 with addr=10.0.0.2, port=4420 00:26:47.258 [2024-11-06 10:18:50.555826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe080 is same with the state(6) to be set 00:26:47.258 [2024-11-06 10:18:50.555876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefc840 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.555887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363a10 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.555896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1328370 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.555906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec960 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.555916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe080 (9): Bad file descriptor 00:26:47.258 [2024-11-06 10:18:50.555944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.555951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.555959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:26:47.258 [2024-11-06 10:18:50.555965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.555973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.555979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.555986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.555993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.556001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.556007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.556014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.556020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.556028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.556035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.556042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.556048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:47.258 [2024-11-06 10:18:50.556056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:47.258 [2024-11-06 10:18:50.556062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:47.258 [2024-11-06 10:18:50.556070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:47.258 [2024-11-06 10:18:50.556076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:26:47.519 10:18:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3966672 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3966672 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3966672 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.462 rmmod nvme_tcp 00:26:48.462 
rmmod nvme_fabrics 00:26:48.462 rmmod nvme_keyring 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3966289 ']' 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3966289 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3966289 ']' 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3966289 00:26:48.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3966289) - No such process 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3966289 is not found' 00:26:48.462 Process with pid 3966289 is not found 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.462 10:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.007 00:26:51.007 real 0m7.967s 00:26:51.007 user 0m20.025s 00:26:51.007 sys 0m1.233s 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.007 ************************************ 00:26:51.007 END TEST nvmf_shutdown_tc3 00:26:51.007 ************************************ 00:26:51.007 10:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.007 ************************************ 00:26:51.007 START TEST nvmf_shutdown_tc4 00:26:51.007 ************************************ 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:26:51.007 10:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.007 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:51.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:51.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.008 10:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:51.008 Found net devices under 0000:31:00.0: cvl_0_0 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:51.008 Found net devices under 0000:31:00.1: cvl_0_1 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.008 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.009 10:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:26:51.009 00:26:51.009 --- 10.0.0.2 ping statistics --- 00:26:51.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.009 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:26:51.009 00:26:51.009 --- 10.0.0.1 ping statistics --- 00:26:51.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.009 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3968131 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3968131 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3968131 ']' 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:51.009 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.009 [2024-11-06 10:18:54.459259] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:51.009 [2024-11-06 10:18:54.459333] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.270 [2024-11-06 10:18:54.557748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.270 [2024-11-06 10:18:54.587524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.270 [2024-11-06 10:18:54.587555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.270 [2024-11-06 10:18:54.587561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.270 [2024-11-06 10:18:54.587566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.270 [2024-11-06 10:18:54.587570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.270 [2024-11-06 10:18:54.588796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.270 [2024-11-06 10:18:54.588952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.270 [2024-11-06 10:18:54.589233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.270 [2024-11-06 10:18:54.589234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.270 [2024-11-06 10:18:54.707788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:51.270 10:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.270 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:51.531 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:51.531 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.531 10:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.531 Malloc1 
00:26:51.531 [2024-11-06 10:18:54.829315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.531 Malloc2 00:26:51.531 Malloc3 00:26:51.531 Malloc4 00:26:51.531 Malloc5 00:26:51.531 Malloc6 00:26:51.792 Malloc7 00:26:51.792 Malloc8 00:26:51.792 Malloc9 00:26:51.792 Malloc10 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3968189 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:51.792 10:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:51.792 [2024-11-06 10:18:55.287339] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3968131 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3968131 ']' 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3968131 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3968131 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3968131' 00:26:57.085 killing process with pid 3968131 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3968131 00:26:57.085 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3968131 00:26:57.085 [2024-11-06 10:19:00.305315] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6310 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67e0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the 
state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.305856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6cb0 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5e40 is same with the state(6) to be set 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 [2024-11-06 10:19:00.306642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5120 is same with the state(6) to be set 00:26:57.085 [2024-11-06 10:19:00.306656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5120 is same with the state(6) to be set 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 [2024-11-06 10:19:00.306661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20d5120 is same with the state(6) to be set 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.085 starting I/O failed: -6 00:26:57.085 Write completed with error (sct=0, sc=8) 00:26:57.086 [2024-11-06 10:19:00.306887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d55f0 is same with the state(6) to be set 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 [2024-11-06 10:19:00.306898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d55f0 is same with starting I/O failed: -6 00:26:57.086 the state(6) to be set 00:26:57.086 [2024-11-06 10:19:00.306916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d55f0 is same with the state(6) to be set 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 [2024-11-06 10:19:00.306921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d55f0 is same with the state(6) to be set 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 starting I/O failed: -6 00:26:57.086 [2024-11-06 10:19:00.307105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with Write completed with error (sct=0, sc=8) 00:26:57.086 the state(6) to be set 00:26:57.086 [2024-11-06 10:19:00.307121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set 00:26:57.086 [2024-11-06 10:19:00.307126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set 00:26:57.086 Write completed with error (sct=0, sc=8) 00:26:57.086 [2024-11-06 10:19:00.307132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set 00:26:57.086 starting I/O failed: -6 00:26:57.086 [2024-11-06 10:19:00.307138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d5ac0 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.086 [2024-11-06 10:19:00.307381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
00:26:57.086 [2024-11-06 10:19:00.307421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4c50 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.086 [2024-11-06 10:19:00.307528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.087 [2024-11-06 10:19:00.309951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:57.087 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.087 [2024-11-06 10:19:00.311124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.087 [2024-11-06 10:19:00.311950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.088 [2024-11-06 10:19:00.312868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.088 [2024-11-06 10:19:00.314390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:57.088 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.089 [2024-11-06 10:19:00.315522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.089 [2024-11-06 10:19:00.316371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.089 [2024-11-06 10:19:00.317303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.090 [2024-11-06 10:19:00.319213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:57.090 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.090 [2024-11-06 10:19:00.320849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.091 [2024-11-06 10:19:00.322394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.091 [2024-11-06 10:19:00.324959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:57.091 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.092 [2024-11-06 10:19:00.326124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.092 [2024-11-06 10:19:00.326932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.092 [2024-11-06 10:19:00.327848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 [2024-11-06 10:19:00.329302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.093 NVMe io qpair process completion error 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 [2024-11-06 10:19:00.330402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.093 starting I/O failed: -6 00:26:57.093 starting I/O failed: -6 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error 
(sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 [2024-11-06 10:19:00.331363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 Write completed with error (sct=0, sc=8) 00:26:57.093 starting I/O failed: -6 
00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 [2024-11-06 10:19:00.332309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 
00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 
00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 [2024-11-06 10:19:00.334868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.094 NVMe io qpair process completion error 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 Write completed with error (sct=0, sc=8) 00:26:57.094 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write 
completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 [2024-11-06 10:19:00.336169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 
00:26:57.095 starting I/O failed: -6 00:26:57.095 [2024-11-06 10:19:00.337021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error 
(sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 [2024-11-06 10:19:00.337958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.095 starting I/O failed: -6 00:26:57.095 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 
00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 [2024-11-06 10:19:00.339653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.096 NVMe io qpair process completion error 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write 
completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 [2024-11-06 10:19:00.340762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error 
(sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 [2024-11-06 10:19:00.341577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.096 Write completed with error (sct=0, sc=8) 00:26:57.096 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 
starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 [2024-11-06 10:19:00.342510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with 
error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error (sct=0, sc=8) 00:26:57.097 starting I/O failed: -6 00:26:57.097 Write completed with error 
(sct=0, sc=8)
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.097 [2024-11-06 10:19:00.344149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:57.097 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.098 [2024-11-06 10:19:00.345676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.098 [2024-11-06 10:19:00.346494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.098 [2024-11-06 10:19:00.347441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.099 [2024-11-06 10:19:00.350299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:57.099 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.099 [2024-11-06 10:19:00.351845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.099 [2024-11-06 10:19:00.352700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:57.100 [2024-11-06 10:19:00.353631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
starting I/O failed: -6 00:26:57.100 Write completed with error (sct=0, sc=8) 00:26:57.100 starting I/O failed: -6 00:26:57.100 Write completed with error (sct=0, sc=8) 00:26:57.100 starting I/O failed: -6 00:26:57.100 Write completed with error (sct=0, sc=8) 00:26:57.100 starting I/O failed: -6 00:26:57.100 Write completed with error (sct=0, sc=8) 00:26:57.100 starting I/O failed: -6 00:26:57.100 [2024-11-06 10:19:00.355258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:57.100 NVMe io qpair process completion error 00:26:57.100 Initializing NVMe Controllers 00:26:57.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.100 Controller IO queue size 128, less than required. 00:26:57.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:26:57.100 Controller IO queue size 128, less than required. 00:26:57.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:26:57.100 Controller IO queue size 128, less than required. 00:26:57.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:57.100 Controller IO queue size 128, less than required. 00:26:57.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:26:57.100 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:26:57.101 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:57.101 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:57.101 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:26:57.101 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:57.101 Controller IO queue size 128, less than required. 00:26:57.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
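The "Controller IO queue size 128, less than required" messages above mean each attached subsystem advertises a 128-entry I/O queue, smaller than what the perf tool requested, so the excess requests sit queued in the host NVMe driver — which is what the per-device averages in the latency summary below reflect: for cnode1, 1895.70 IOPS x 67545.54 us of average latency works out to roughly 128 commands outstanding (Little's law), i.e. a permanently full queue. A minimal sketch of rerunning the same workload with a shallower queue and smaller I/O size; the flags are the standard spdk_nvme_perf options and the address/subsystem values are copied from this log, so treat the invocation as illustrative rather than the harness's own command line:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 64 -o 4096 -w write -t 10    # queue depth 64, 4 KiB writes, 10 seconds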
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:57.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:57.101 Initialization complete. Launching workers.
00:26:57.101 ========================================================
00:26:57.101 Latency(us)
00:26:57.101 Device Information : IOPS MiB/s Average min max
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1895.70 81.46 67545.54 646.82 120259.42
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1864.96 80.13 68690.43 807.66 148861.60
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1866.91 80.22 68639.32 681.23 148417.78
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1891.37 81.27 67788.72 689.43 121907.45
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1871.67 80.42 68524.15 850.33 120528.82
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1876.43 80.63 68377.64 655.03 133758.18
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1881.19 80.83 68248.06 885.44 137683.49
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1851.97 79.58 68605.27 774.05 121831.67
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1850.23 79.50 68688.93 697.79 121811.89
00:26:57.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1882.71 80.90 67523.98 668.25 122398.15
00:26:57.101 ========================================================
00:26:57.101 Total : 18733.14 804.94 68260.27 646.82 148861.60
00:26:57.101
00:26:57.101 [2024-11-06 10:19:00.358678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248f360 is same with the state(6) to be set
00:26:57.101 [2024-11-06 10:19:00.358725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248d390 is same with the state(6) to be set
00:26:57.101 [2024-11-06 10:19:00.358756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e380 is same with the state(6) to be set
00:26:57.101 [2024-11-06 10:19:00.358787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e9e0 is same with the state(6) to be set
00:26:57.101 [2024-11-06 10:19:00.358816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e050 is same with the state(6) to be set
00:26:57.101 [2024-11-06 10:19:00.358843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x248d9f0 is same with the state(6) to be set 00:26:57.101 [2024-11-06 10:19:00.358888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248d060 is same with the state(6) to be set 00:26:57.101 [2024-11-06 10:19:00.358919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248f540 is same with the state(6) to be set 00:26:57.101 [2024-11-06 10:19:00.358948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248d6c0 is same with the state(6) to be set 00:26:57.101 [2024-11-06 10:19:00.358976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e6b0 is same with the state(6) to be set 00:26:57.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:57.101 10:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3968189 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3968189 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.041 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3968189 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:58.302 10:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:58.302 rmmod nvme_tcp 00:26:58.302 rmmod nvme_fabrics 00:26:58.302 rmmod nvme_keyring 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:58.302 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3968131 ']' 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3968131 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3968131 ']' 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3968131 00:26:58.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3968131) - No such process 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3968131 is not found' 00:26:58.303 Process with pid 3968131 is not found 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.303 10:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.220 10:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.220 00:27:00.220 real 0m9.720s 00:27:00.220 user 0m25.490s 00:27:00.220 sys 0m3.948s 00:27:00.220 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:00.220 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:00.220 ************************************ 00:27:00.220 END TEST nvmf_shutdown_tc4 00:27:00.220 ************************************ 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:27:00.480 00:27:00.480 real 0m43.875s 00:27:00.480 user 1m43.975s 00:27:00.480 sys 0m14.230s 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.480 ************************************ 00:27:00.480 END TEST nvmf_shutdown 00:27:00.480 ************************************ 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.480 ************************************ 00:27:00.480 START TEST nvmf_nsid 00:27:00.480 ************************************ 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:00.480 * Looking for test storage... 
00:27:00.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:27:00.480 10:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.742 --rc genhtml_branch_coverage=1 00:27:00.742 --rc genhtml_function_coverage=1 00:27:00.742 --rc genhtml_legend=1 00:27:00.742 --rc geninfo_all_blocks=1 00:27:00.742 --rc geninfo_unexecuted_blocks=1 00:27:00.742 00:27:00.742 ' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.742 --rc genhtml_branch_coverage=1 00:27:00.742 --rc genhtml_function_coverage=1 00:27:00.742 --rc genhtml_legend=1 00:27:00.742 --rc geninfo_all_blocks=1 00:27:00.742 --rc geninfo_unexecuted_blocks=1 00:27:00.742 00:27:00.742 ' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.742 --rc genhtml_branch_coverage=1 00:27:00.742 --rc genhtml_function_coverage=1 00:27:00.742 --rc genhtml_legend=1 00:27:00.742 --rc geninfo_all_blocks=1 00:27:00.742 --rc geninfo_unexecuted_blocks=1 00:27:00.742 00:27:00.742 ' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:00.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.742 --rc genhtml_branch_coverage=1 00:27:00.742 --rc genhtml_function_coverage=1 00:27:00.742 --rc genhtml_legend=1 00:27:00.742 --rc geninfo_all_blocks=1 00:27:00.742 --rc geninfo_unexecuted_blocks=1 00:27:00.742 00:27:00.742 ' 00:27:00.742 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.743 10:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:08.884 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:08.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:08.884 Found net devices under 0000:31:00.0: cvl_0_0 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:08.884 Found net devices under 0000:31:00.1: cvl_0_1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.884 10:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.884 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:27:09.145 00:27:09.145 --- 10.0.0.2 ping statistics --- 00:27:09.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.145 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:09.145 00:27:09.145 --- 10.0.0.1 ping statistics --- 00:27:09.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.145 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3974217 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3974217 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3974217 ']' 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.145 10:19:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:09.145 [2024-11-06 10:19:12.505645] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:27:09.145 [2024-11-06 10:19:12.505712] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.145 [2024-11-06 10:19:12.597166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.145 [2024-11-06 10:19:12.637105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.145 [2024-11-06 10:19:12.637143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.145 [2024-11-06 10:19:12.637151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.145 [2024-11-06 10:19:12.637157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.145 [2024-11-06 10:19:12.637163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.145 [2024-11-06 10:19:12.637775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3974489 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:27:10.086 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d0a92c20-96d0-4622-87b2-660d14de4b27 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=09729e2a-bd76-4f9f-a3ea-9d87e8375d6b 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7572cedf-45ea-4c0e-8ed2-b4334710d894 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.087 null0 00:27:10.087 null1 00:27:10.087 [2024-11-06 10:19:13.397907] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:10.087 [2024-11-06 10:19:13.397958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974489 ] 00:27:10.087 null2 00:27:10.087 [2024-11-06 10:19:13.403434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.087 [2024-11-06 10:19:13.427649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3974489 /var/tmp/tgt2.sock 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3974489 ']' 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:27:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:10.087 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.087 [2024-11-06 10:19:13.486676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.087 [2024-11-06 10:19:13.516606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.347 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:10.347 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:27:10.347 10:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:27:10.606 [2024-11-06 10:19:13.974436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.606 [2024-11-06 10:19:13.990549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:27:10.606 nvme0n1 nvme0n2 00:27:10.606 nvme1n1 00:27:10.606 10:19:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:27:10.606 10:19:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:27:10.606 10:19:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:27:11.988 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:27:13.371 10:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d0a92c20-96d0-4622-87b2-660d14de4b27 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d0a92c2096d0462287b2660d14de4b27 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D0A92C2096D0462287B2660D14DE4B27 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D0A92C2096D0462287B2660D14DE4B27 == \D\0\A\9\2\C\2\0\9\6\D\0\4\6\2\2\8\7\B\2\6\6\0\D\1\4\D\E\4\B\2\7 ]] 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 09729e2a-bd76-4f9f-a3ea-9d87e8375d6b 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=09729e2abd764f9fa3ea9d87e8375d6b 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 09729E2ABD764F9FA3EA9D87E8375D6B 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 09729E2ABD764F9FA3EA9D87E8375D6B == \0\9\7\2\9\E\2\A\B\D\7\6\4\F\9\F\A\3\E\A\9\D\8\7\E\8\3\7\5\D\6\B ]] 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:27:13.371 10:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7572cedf-45ea-4c0e-8ed2-b4334710d894 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7572cedf45ea4c0e8ed2b4334710d894 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7572CEDF45EA4C0E8ED2B4334710D894 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7572CEDF45EA4C0E8ED2B4334710D894 == \7\5\7\2\C\E\D\F\4\5\E\A\4\C\0\E\8\E\D\2\B\4\3\3\4\7\1\0\D\8\9\4 ]] 00:27:13.371 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3974489 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3974489 ']' 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3974489 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3974489 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3974489' 00:27:13.631 killing process with pid 3974489 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3974489 00:27:13.631 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3974489 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.891 rmmod nvme_tcp 00:27:13.891 rmmod nvme_fabrics 00:27:13.891 rmmod nvme_keyring 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3974217 ']' 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3974217 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3974217 ']' 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3974217 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:13.891 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3974217 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3974217' 00:27:13.892 killing process with pid 3974217 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3974217 00:27:13.892 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3974217 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.152 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.065 00:27:16.065 real 0m15.670s 00:27:16.065 user 
0m11.475s 00:27:16.065 sys 0m7.344s 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 ************************************ 00:27:16.065 END TEST nvmf_nsid 00:27:16.065 ************************************ 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:16.065 00:27:16.065 real 13m25.199s 00:27:16.065 user 27m19.806s 00:27:16.065 sys 4m11.025s 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:16.065 10:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 ************************************ 00:27:16.065 END TEST nvmf_target_extra 00:27:16.065 ************************************ 00:27:16.334 10:19:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:16.334 10:19:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:16.334 10:19:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:16.334 10:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.334 ************************************ 00:27:16.334 START TEST nvmf_host 00:27:16.334 ************************************ 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:16.334 * Looking for test storage... 00:27:16.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.334 --rc genhtml_branch_coverage=1 00:27:16.334 --rc genhtml_function_coverage=1 00:27:16.334 --rc genhtml_legend=1 00:27:16.334 --rc geninfo_all_blocks=1 00:27:16.334 --rc geninfo_unexecuted_blocks=1 00:27:16.334 00:27:16.334 ' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.334 --rc genhtml_branch_coverage=1 00:27:16.334 --rc genhtml_function_coverage=1 00:27:16.334 --rc genhtml_legend=1 00:27:16.334 --rc geninfo_all_blocks=1 00:27:16.334 --rc geninfo_unexecuted_blocks=1 00:27:16.334 00:27:16.334 ' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.334 --rc genhtml_branch_coverage=1 00:27:16.334 --rc genhtml_function_coverage=1 00:27:16.334 --rc genhtml_legend=1 00:27:16.334 --rc geninfo_all_blocks=1 00:27:16.334 --rc geninfo_unexecuted_blocks=1 00:27:16.334 00:27:16.334 ' 00:27:16.334 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.334 --rc genhtml_branch_coverage=1 00:27:16.334 --rc genhtml_function_coverage=1 00:27:16.334 --rc genhtml_legend=1 00:27:16.334 --rc geninfo_all_blocks=1 00:27:16.335 --rc geninfo_unexecuted_blocks=1 00:27:16.335 00:27:16.335 ' 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.335 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.600 10:19:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.601 ************************************ 00:27:16.601 START TEST nvmf_multicontroller 00:27:16.601 ************************************ 00:27:16.601 10:19:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:16.601 * Looking for test storage... 
00:27:16.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.601 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:16.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.862 --rc genhtml_branch_coverage=1 00:27:16.862 --rc genhtml_function_coverage=1 00:27:16.862 --rc genhtml_legend=1 00:27:16.862 --rc geninfo_all_blocks=1 00:27:16.862 --rc geninfo_unexecuted_blocks=1 00:27:16.862 00:27:16.862 ' 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:16.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.862 --rc genhtml_branch_coverage=1 00:27:16.862 --rc genhtml_function_coverage=1 00:27:16.862 --rc genhtml_legend=1 00:27:16.862 --rc geninfo_all_blocks=1 00:27:16.862 --rc geninfo_unexecuted_blocks=1 00:27:16.862 00:27:16.862 ' 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:16.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.862 --rc genhtml_branch_coverage=1 00:27:16.862 --rc genhtml_function_coverage=1 00:27:16.862 --rc genhtml_legend=1 00:27:16.862 --rc geninfo_all_blocks=1 00:27:16.862 --rc geninfo_unexecuted_blocks=1 00:27:16.862 00:27:16.862 ' 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:16.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.862 --rc genhtml_branch_coverage=1 00:27:16.862 --rc genhtml_function_coverage=1 00:27:16.862 --rc genhtml_legend=1 00:27:16.862 --rc geninfo_all_blocks=1 00:27:16.862 --rc geninfo_unexecuted_blocks=1 00:27:16.862 00:27:16.862 ' 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:16.862 10:19:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.862 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.863 10:19:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.863 10:19:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.092 
10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:25.092 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:25.092 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.092 10:19:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:25.092 Found net devices under 0000:31:00.0: cvl_0_0 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:25.092 Found net devices under 0000:31:00.1: cvl_0_1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
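The trace above has enumerated the two e810 ports and mapped them to their net devices (cvl_0_0 and cvl_0_1); the nvmf_tcp_init records that follow wire those two ports back-to-back so the target and initiator sides can run on a single host. A condensed, slightly simplified sketch of that setup, assuming the interface and namespace names the log reports (the SPDK_NVMF comment tag that the test adds to the iptables rule is omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side gets 10.0.0.2
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port toward the initiator
  ping -c 1 10.0.0.2                                                   # reachability check toward the target ...
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # ... and back toward the initiator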
00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.092 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:27:25.092 00:27:25.092 --- 10.0.0.2 ping statistics --- 00:27:25.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.093 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:27:25.093 00:27:25.093 --- 10.0.0.1 ping statistics --- 00:27:25.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.093 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3980048 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3980048 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3980048 ']' 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.093 10:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 [2024-11-06 10:19:28.641726] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:27:25.355 [2024-11-06 10:19:28.641779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.355 [2024-11-06 10:19:28.750242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.355 [2024-11-06 10:19:28.803452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.355 [2024-11-06 10:19:28.803501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.355 [2024-11-06 10:19:28.803509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.355 [2024-11-06 10:19:28.803516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.355 [2024-11-06 10:19:28.803523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.355 [2024-11-06 10:19:28.805454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.355 [2024-11-06 10:19:28.805619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.355 [2024-11-06 10:19:28.805619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.299 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.300 [2024-11-06 10:19:29.492981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.300 Malloc0 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.300 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 [2024-11-06 10:19:29.558950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 [2024-11-06 10:19:29.570881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 Malloc1 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3980399 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3980399 /var/tmp/bdevperf.sock 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3980399 ']' 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
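At this point the target running inside cvl_0_0_ns_spdk has been configured: a TCP transport, two subsystems (cnode1 and cnode2) each backed by a 64 MiB / 512 B-block malloc bdev, and listeners on ports 4420 and 4421 of 10.0.0.2. The rpc_cmd helper in the trace appears to be a thin wrapper around scripts/rpc.py, so the cnode1 half of that configuration could be reproduced roughly as below (the cnode2 calls mirror these with Malloc1 and its own serial; socket paths and names are as reported above):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf is then launched with its own RPC socket, waiting (-z) for controllers to be attached:
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f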
00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:26.302 10:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 NVMe0n1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.249 1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 request: 00:27:27.249 { 00:27:27.249 "name": "NVMe0", 00:27:27.249 "trtype": "tcp", 00:27:27.249 "traddr": "10.0.0.2", 00:27:27.249 "adrfam": "ipv4", 00:27:27.249 "trsvcid": "4420", 00:27:27.249 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:27.249 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:27.249 "hostaddr": "10.0.0.1", 00:27:27.249 "prchk_reftag": false, 00:27:27.249 "prchk_guard": false, 00:27:27.249 "hdgst": false, 00:27:27.249 "ddgst": false, 00:27:27.249 "allow_unrecognized_csi": false, 00:27:27.249 "method": "bdev_nvme_attach_controller", 00:27:27.249 "req_id": 1 00:27:27.249 } 00:27:27.249 Got JSON-RPC error response 00:27:27.249 response: 00:27:27.249 { 00:27:27.249 "code": -114, 00:27:27.249 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:27.249 } 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 request: 00:27:27.249 { 00:27:27.249 "name": "NVMe0", 00:27:27.249 "trtype": "tcp", 00:27:27.249 "traddr": "10.0.0.2", 00:27:27.249 "adrfam": "ipv4", 00:27:27.249 "trsvcid": "4420", 00:27:27.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:27.249 "hostaddr": "10.0.0.1", 00:27:27.249 "prchk_reftag": false, 00:27:27.249 "prchk_guard": false, 00:27:27.249 "hdgst": false, 00:27:27.249 "ddgst": false, 00:27:27.249 "allow_unrecognized_csi": false, 00:27:27.249 "method": "bdev_nvme_attach_controller", 00:27:27.249 "req_id": 1 00:27:27.249 } 00:27:27.249 Got JSON-RPC error response 00:27:27.249 response: 00:27:27.249 { 00:27:27.249 "code": -114, 00:27:27.249 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:27.249 } 00:27:27.249 10:19:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.249 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.250 request: 00:27:27.250 { 00:27:27.250 "name": "NVMe0", 00:27:27.250 "trtype": "tcp", 00:27:27.250 "traddr": "10.0.0.2", 00:27:27.250 "adrfam": "ipv4", 00:27:27.250 "trsvcid": "4420", 00:27:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.250 "hostaddr": "10.0.0.1", 00:27:27.250 "prchk_reftag": false, 00:27:27.250 "prchk_guard": false, 00:27:27.250 "hdgst": false, 00:27:27.250 "ddgst": false, 00:27:27.250 "multipath": "disable", 00:27:27.250 "allow_unrecognized_csi": false, 00:27:27.250 "method": "bdev_nvme_attach_controller", 00:27:27.250 "req_id": 1 00:27:27.250 } 00:27:27.250 Got JSON-RPC error response 00:27:27.250 response: 00:27:27.250 { 00:27:27.250 "code": -114, 00:27:27.250 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:27.250 } 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.250 10:19:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.250 request: 00:27:27.250 { 00:27:27.250 "name": "NVMe0", 00:27:27.250 "trtype": "tcp", 00:27:27.250 "traddr": "10.0.0.2", 00:27:27.250 "adrfam": "ipv4", 00:27:27.250 "trsvcid": "4420", 00:27:27.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.250 "hostaddr": "10.0.0.1", 00:27:27.250 "prchk_reftag": false, 00:27:27.250 "prchk_guard": false, 00:27:27.250 "hdgst": false, 00:27:27.250 "ddgst": false, 00:27:27.250 "multipath": "failover", 00:27:27.250 "allow_unrecognized_csi": false, 00:27:27.250 "method": "bdev_nvme_attach_controller", 00:27:27.250 "req_id": 1 00:27:27.250 } 00:27:27.250 Got JSON-RPC error response 00:27:27.250 response: 00:27:27.250 { 00:27:27.250 "code": -114, 00:27:27.250 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:27.250 } 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.250 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.511 NVMe0n1 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
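The rejected bdev_nvme_attach_controller calls above all try to register a second controller under the existing name NVMe0 while either reusing the already-attached 10.0.0.2:4420 path or changing identity parameters (a different hostnqn, a different subsystem NQN, or multipath set to disable/failover against that same listener); each returns JSON-RPC error -114 with the "already exists" messages shown. The call that follows targets the second listener port instead and is accepted. A minimal sketch of the accepted sequence, issued against the bdevperf RPC socket with the names used in the trace:

  # first path to cnode1, registered as bdev controller NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # second path to the same subsystem via the 4421 listener; this adds a new path rather than recreating the old one,
  # so reusing the controller name succeeds, as the trace below shows
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1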
00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.511 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:27.511 10:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:28.897 { 00:27:28.897 "results": [ 00:27:28.897 { 00:27:28.897 "job": "NVMe0n1", 00:27:28.897 "core_mask": "0x1", 00:27:28.897 "workload": "write", 00:27:28.897 "status": "finished", 00:27:28.897 "queue_depth": 128, 00:27:28.897 "io_size": 4096, 00:27:28.897 "runtime": 1.006154, 00:27:28.897 "iops": 28380.347342454534, 00:27:28.897 "mibps": 110.86073180646302, 00:27:28.897 "io_failed": 0, 00:27:28.897 "io_timeout": 0, 00:27:28.897 "avg_latency_us": 4500.909430922781, 00:27:28.897 "min_latency_us": 2362.0266666666666, 00:27:28.897 "max_latency_us": 8301.226666666667 00:27:28.897 } 00:27:28.897 ], 00:27:28.897 "core_count": 1 00:27:28.897 } 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3980399 ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3980399' 00:27:28.897 killing process with pid 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3980399 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:28.897 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:28.897 [2024-11-06 10:19:29.693439] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:27:28.897 [2024-11-06 10:19:29.693497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980399 ] 00:27:28.897 [2024-11-06 10:19:29.771108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.897 [2024-11-06 10:19:29.807396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.897 [2024-11-06 10:19:30.878555] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 7f774359-5ec1-4c0f-b34e-fb2b5b3ce400 already exists 00:27:28.897 [2024-11-06 10:19:30.878586] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:7f774359-5ec1-4c0f-b34e-fb2b5b3ce400 alias for bdev NVMe1n1 00:27:28.897 [2024-11-06 10:19:30.878595] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:28.897 Running I/O for 1 seconds... 00:27:28.897 28346.00 IOPS, 110.73 MiB/s 00:27:28.897 Latency(us) 00:27:28.897 [2024-11-06T09:19:32.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.897 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:28.897 NVMe0n1 : 1.01 28380.35 110.86 0.00 0.00 4500.91 2362.03 8301.23 00:27:28.897 [2024-11-06T09:19:32.398Z] =================================================================================================================== 00:27:28.897 [2024-11-06T09:19:32.398Z] Total : 28380.35 110.86 0.00 0.00 4500.91 2362.03 8301.23 00:27:28.897 Received shutdown signal, test time was about 1.000000 seconds 00:27:28.897 00:27:28.897 Latency(us) 00:27:28.897 [2024-11-06T09:19:32.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.897 [2024-11-06T09:19:32.398Z] =================================================================================================================== 00:27:28.897 [2024-11-06T09:19:32.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.897 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.897 rmmod nvme_tcp 00:27:28.897 rmmod nvme_fabrics 00:27:28.897 rmmod nvme_keyring 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:28.897 
10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3980048 ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3980048 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3980048 ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3980048 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.897 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3980048 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3980048' 00:27:29.159 killing process with pid 3980048 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3980048 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3980048 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.159 10:19:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.704 00:27:31.704 real 0m14.710s 00:27:31.704 user 0m16.587s 00:27:31.704 sys 0m7.058s 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:31.704 ************************************ 00:27:31.704 END TEST nvmf_multicontroller 00:27:31.704 ************************************ 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:31.704 10:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.704 ************************************ 00:27:31.704 START TEST nvmf_aer 00:27:31.704 ************************************ 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:31.705 * Looking for test storage... 00:27:31.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.705 --rc genhtml_branch_coverage=1 00:27:31.705 --rc genhtml_function_coverage=1 00:27:31.705 --rc genhtml_legend=1 00:27:31.705 --rc geninfo_all_blocks=1 00:27:31.705 --rc geninfo_unexecuted_blocks=1 00:27:31.705 00:27:31.705 ' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.705 --rc genhtml_branch_coverage=1 00:27:31.705 --rc genhtml_function_coverage=1 00:27:31.705 --rc genhtml_legend=1 00:27:31.705 --rc geninfo_all_blocks=1 00:27:31.705 --rc geninfo_unexecuted_blocks=1 00:27:31.705 00:27:31.705 ' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.705 --rc genhtml_branch_coverage=1 00:27:31.705 --rc genhtml_function_coverage=1 00:27:31.705 --rc genhtml_legend=1 00:27:31.705 --rc geninfo_all_blocks=1 00:27:31.705 --rc geninfo_unexecuted_blocks=1 00:27:31.705 00:27:31.705 ' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.705 --rc genhtml_branch_coverage=1 00:27:31.705 --rc genhtml_function_coverage=1 00:27:31.705 --rc genhtml_legend=1 00:27:31.705 --rc geninfo_all_blocks=1 00:27:31.705 --rc geninfo_unexecuted_blocks=1 00:27:31.705 00:27:31.705 ' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.705 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.706 10:19:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:39.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:39.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.851 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:39.852 Found net devices under 0000:31:00.0: cvl_0_0 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.852 10:19:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:39.852 Found net devices under 0000:31:00.1: cvl_0_1 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.852 10:19:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.852 
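For reference, the nvmf_tcp_init sequence traced above reduces to the following target-side network bring-up. This is a condensed sketch of what test/nvmf/common.sh does in this run, reusing the interface names and addresses shown in the trace; it is not the full script logic:

    # move one e810 port into a private namespace for the target,
    # keep the other port on the host as the initiator side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks that follow in the trace simply confirm that each side can reach the other before the target application is started.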
10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:27:39.852 00:27:39.852 --- 10.0.0.2 ping statistics --- 00:27:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.852 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:27:39.852 00:27:39.852 --- 10.0.0.1 ping statistics --- 00:27:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.852 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3985631 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3985631 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3985631 ']' 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:39.852 10:19:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:39.852 [2024-11-06 10:19:43.320642] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
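The nvmfappstart step above launches the target application inside the namespace created earlier. Condensed, and using the workspace path, core mask, and trace flags from this run, it is roughly:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten (autotest_common.sh) blocks until the app answers on the
    # RPC socket, /var/tmp/spdk.sock by default as shown in the trace
    waitforlisten "$nvmfpid"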
00:27:39.852 [2024-11-06 10:19:43.320707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.113 [2024-11-06 10:19:43.409835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.113 [2024-11-06 10:19:43.450616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.113 [2024-11-06 10:19:43.450653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.113 [2024-11-06 10:19:43.450661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.113 [2024-11-06 10:19:43.450668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.113 [2024-11-06 10:19:43.450674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.113 [2024-11-06 10:19:43.452296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.113 [2024-11-06 10:19:43.452409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.113 [2024-11-06 10:19:43.452564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.113 [2024-11-06 10:19:43.452564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.684 [2024-11-06 10:19:44.167403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.684 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.944 Malloc0 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.944 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.945 [2024-11-06 10:19:44.234173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:40.945 [ 00:27:40.945 { 00:27:40.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:40.945 "subtype": "Discovery", 00:27:40.945 "listen_addresses": [], 00:27:40.945 "allow_any_host": true, 00:27:40.945 "hosts": [] 00:27:40.945 }, 00:27:40.945 { 00:27:40.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.945 "subtype": "NVMe", 00:27:40.945 "listen_addresses": [ 00:27:40.945 { 00:27:40.945 "trtype": "TCP", 00:27:40.945 "adrfam": "IPv4", 00:27:40.945 "traddr": "10.0.0.2", 00:27:40.945 "trsvcid": "4420" 00:27:40.945 } 00:27:40.945 ], 00:27:40.945 "allow_any_host": true, 00:27:40.945 "hosts": [], 00:27:40.945 "serial_number": "SPDK00000000000001", 00:27:40.945 "model_number": "SPDK bdev Controller", 00:27:40.945 "max_namespaces": 2, 00:27:40.945 "min_cntlid": 1, 00:27:40.945 "max_cntlid": 65519, 00:27:40.945 "namespaces": [ 00:27:40.945 { 00:27:40.945 "nsid": 1, 00:27:40.945 "bdev_name": "Malloc0", 00:27:40.945 "name": "Malloc0", 00:27:40.945 "nguid": "F030C9C2669B4458BFB22329A07DE811", 00:27:40.945 "uuid": "f030c9c2-669b-4458-bfb2-2329a07de811" 00:27:40.945 } 00:27:40.945 ] 00:27:40.945 } 00:27:40.945 ] 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3985797 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:27:40.945 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.205 Malloc1 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:41.205 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.206 Asynchronous Event Request test 00:27:41.206 Attaching to 10.0.0.2 00:27:41.206 Attached to 10.0.0.2 00:27:41.206 Registering asynchronous event callbacks... 00:27:41.206 Starting namespace attribute notice tests for all controllers... 00:27:41.206 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:41.206 aer_cb - Changed Namespace 00:27:41.206 Cleaning up... 
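Condensed, the aer.sh flow traced above issues the following RPCs against the target. rpc_cmd is a thin wrapper around scripts/rpc.py in autotest_common.sh; this is a sketch of this run's calls, assuming the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # start the AER listener, then hot-add a second namespace so it observes a
    # namespace-attribute-changed event (log page 4), as seen in the trace above
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems output that follows shows the result: cnode1 now carries both Malloc0 (nsid 1) and Malloc1 (nsid 2).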
00:27:41.206 [ 00:27:41.206 { 00:27:41.206 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:41.206 "subtype": "Discovery", 00:27:41.206 "listen_addresses": [], 00:27:41.206 "allow_any_host": true, 00:27:41.206 "hosts": [] 00:27:41.206 }, 00:27:41.206 { 00:27:41.206 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.206 "subtype": "NVMe", 00:27:41.206 "listen_addresses": [ 00:27:41.206 { 00:27:41.206 "trtype": "TCP", 00:27:41.206 "adrfam": "IPv4", 00:27:41.206 "traddr": "10.0.0.2", 00:27:41.206 "trsvcid": "4420" 00:27:41.206 } 00:27:41.206 ], 00:27:41.206 "allow_any_host": true, 00:27:41.206 "hosts": [], 00:27:41.206 "serial_number": "SPDK00000000000001", 00:27:41.206 "model_number": "SPDK bdev Controller", 00:27:41.206 "max_namespaces": 2, 00:27:41.206 "min_cntlid": 1, 00:27:41.206 "max_cntlid": 65519, 00:27:41.206 "namespaces": [ 00:27:41.206 { 00:27:41.206 "nsid": 1, 00:27:41.206 "bdev_name": "Malloc0", 00:27:41.206 "name": "Malloc0", 00:27:41.206 "nguid": "F030C9C2669B4458BFB22329A07DE811", 00:27:41.206 "uuid": "f030c9c2-669b-4458-bfb2-2329a07de811" 00:27:41.206 }, 00:27:41.206 { 00:27:41.206 "nsid": 2, 00:27:41.206 "bdev_name": "Malloc1", 00:27:41.206 "name": "Malloc1", 00:27:41.206 "nguid": "E90833E19CF44115A91C0D10CF6BD825", 00:27:41.206 "uuid": "e90833e1-9cf4-4115-a91c-0d10cf6bd825" 00:27:41.206 } 00:27:41.206 ] 00:27:41.206 } 00:27:41.206 ] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3985797 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.206 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.206 rmmod 
nvme_tcp 00:27:41.466 rmmod nvme_fabrics 00:27:41.466 rmmod nvme_keyring 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3985631 ']' 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3985631 ']' 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3985631' 00:27:41.466 killing process with pid 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3985631 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.466 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.467 10:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.009 00:27:44.009 real 0m12.359s 00:27:44.009 user 0m8.459s 00:27:44.009 sys 0m6.756s 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 ************************************ 00:27:44.009 END TEST nvmf_aer 00:27:44.009 ************************************ 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.009 ************************************ 00:27:44.009 START TEST nvmf_async_init 00:27:44.009 ************************************ 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:44.009 * Looking for test storage... 00:27:44.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.009 --rc genhtml_branch_coverage=1 00:27:44.009 --rc genhtml_function_coverage=1 00:27:44.009 --rc genhtml_legend=1 00:27:44.009 --rc geninfo_all_blocks=1 00:27:44.009 --rc geninfo_unexecuted_blocks=1 00:27:44.009 00:27:44.009 ' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.009 --rc genhtml_branch_coverage=1 00:27:44.009 --rc genhtml_function_coverage=1 00:27:44.009 --rc genhtml_legend=1 00:27:44.009 --rc geninfo_all_blocks=1 00:27:44.009 --rc geninfo_unexecuted_blocks=1 00:27:44.009 00:27:44.009 ' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.009 --rc genhtml_branch_coverage=1 00:27:44.009 --rc genhtml_function_coverage=1 00:27:44.009 --rc genhtml_legend=1 00:27:44.009 --rc geninfo_all_blocks=1 00:27:44.009 --rc geninfo_unexecuted_blocks=1 00:27:44.009 00:27:44.009 ' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.009 --rc genhtml_branch_coverage=1 00:27:44.009 --rc genhtml_function_coverage=1 00:27:44.009 --rc genhtml_legend=1 00:27:44.009 --rc geninfo_all_blocks=1 00:27:44.009 --rc geninfo_unexecuted_blocks=1 00:27:44.009 00:27:44.009 ' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.009 10:19:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.009 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:44.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:44.010 10:19:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5f47c01ddd2e425b89f4fe5df3b07d49 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.010 10:19:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:52.149 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:52.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.149 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:52.150 Found net devices under 0000:31:00.0: cvl_0_0 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:52.150 Found net devices under 0000:31:00.1: cvl_0_1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.150 10:19:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.150 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:27:52.411 00:27:52.411 --- 10.0.0.2 ping statistics --- 00:27:52.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.411 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:27:52.411 00:27:52.411 --- 10.0.0.1 ping statistics --- 00:27:52.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.411 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3990701 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3990701 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3990701 ']' 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:52.411 10:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.411 [2024-11-06 10:19:55.847094] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
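Note for readers following the trace: before nvmf_tgt comes up, nvmf_tcp_init splits the two E810 ports (exposed as cvl_0_0 and cvl_0_1) between a fresh network namespace and the root namespace, so initiator and target traffic use distinct interfaces. A condensed sketch reconstructed from the ip/iptables commands visible above — interface names and the 10.0.0.x addresses are specific to this run, and the iptables comment is abbreviated here:

  # Target-side port moves into its own namespace; initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the rule carries an SPDK_NVMF comment so teardown can find it.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The sub-millisecond round-trip times in the ping statistics above confirm the wiring works before the target process is started inside the namespace.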
00:27:52.411 [2024-11-06 10:19:55.847161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.671 [2024-11-06 10:19:55.936980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.671 [2024-11-06 10:19:55.977711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.671 [2024-11-06 10:19:55.977750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.671 [2024-11-06 10:19:55.977759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.671 [2024-11-06 10:19:55.977766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.671 [2024-11-06 10:19:55.977773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.671 [2024-11-06 10:19:55.978387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 [2024-11-06 10:19:56.687738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 null0 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5f47c01ddd2e425b89f4fe5df3b07d49 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.241 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.501 [2024-11-06 10:19:56.748031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.501 nvme0n1 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.501 10:19:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.762 [ 00:27:53.762 { 00:27:53.762 "name": "nvme0n1", 00:27:53.762 "aliases": [ 00:27:53.762 "5f47c01d-dd2e-425b-89f4-fe5df3b07d49" 00:27:53.762 ], 00:27:53.762 "product_name": "NVMe disk", 00:27:53.762 "block_size": 512, 00:27:53.762 "num_blocks": 2097152, 00:27:53.762 "uuid": "5f47c01d-dd2e-425b-89f4-fe5df3b07d49", 00:27:53.762 "numa_id": 0, 00:27:53.762 "assigned_rate_limits": { 00:27:53.762 "rw_ios_per_sec": 0, 00:27:53.762 "rw_mbytes_per_sec": 0, 00:27:53.762 "r_mbytes_per_sec": 0, 00:27:53.762 "w_mbytes_per_sec": 0 00:27:53.762 }, 00:27:53.762 "claimed": false, 00:27:53.762 "zoned": false, 00:27:53.762 "supported_io_types": { 00:27:53.762 "read": true, 00:27:53.762 "write": true, 00:27:53.762 "unmap": false, 00:27:53.762 "flush": true, 00:27:53.762 "reset": true, 00:27:53.762 "nvme_admin": true, 00:27:53.762 "nvme_io": true, 00:27:53.762 "nvme_io_md": false, 00:27:53.762 "write_zeroes": true, 00:27:53.762 "zcopy": false, 00:27:53.762 "get_zone_info": false, 00:27:53.762 "zone_management": false, 00:27:53.762 "zone_append": false, 00:27:53.762 "compare": true, 00:27:53.762 "compare_and_write": true, 00:27:53.762 "abort": true, 00:27:53.762 "seek_hole": false, 00:27:53.762 "seek_data": false, 00:27:53.762 "copy": true, 00:27:53.762 "nvme_iov_md": false 00:27:53.762 }, 00:27:53.762 
"memory_domains": [ 00:27:53.762 { 00:27:53.762 "dma_device_id": "system", 00:27:53.762 "dma_device_type": 1 00:27:53.762 } 00:27:53.762 ], 00:27:53.762 "driver_specific": { 00:27:53.762 "nvme": [ 00:27:53.762 { 00:27:53.762 "trid": { 00:27:53.762 "trtype": "TCP", 00:27:53.762 "adrfam": "IPv4", 00:27:53.762 "traddr": "10.0.0.2", 00:27:53.762 "trsvcid": "4420", 00:27:53.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:53.762 }, 00:27:53.762 "ctrlr_data": { 00:27:53.762 "cntlid": 1, 00:27:53.762 "vendor_id": "0x8086", 00:27:53.762 "model_number": "SPDK bdev Controller", 00:27:53.762 "serial_number": "00000000000000000000", 00:27:53.762 "firmware_revision": "25.01", 00:27:53.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.762 "oacs": { 00:27:53.762 "security": 0, 00:27:53.762 "format": 0, 00:27:53.762 "firmware": 0, 00:27:53.762 "ns_manage": 0 00:27:53.762 }, 00:27:53.762 "multi_ctrlr": true, 00:27:53.762 "ana_reporting": false 00:27:53.762 }, 00:27:53.762 "vs": { 00:27:53.762 "nvme_version": "1.3" 00:27:53.762 }, 00:27:53.762 "ns_data": { 00:27:53.762 "id": 1, 00:27:53.762 "can_share": true 00:27:53.762 } 00:27:53.762 } 00:27:53.762 ], 00:27:53.762 "mp_policy": "active_passive" 00:27:53.762 } 00:27:53.762 } 00:27:53.762 ] 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.762 [2024-11-06 10:19:57.022249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:53.762 [2024-11-06 10:19:57.022311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d61e0 (9): Bad file descriptor 00:27:53.762 [2024-11-06 10:19:57.153960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.762 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.762 [ 00:27:53.762 { 00:27:53.762 "name": "nvme0n1", 00:27:53.762 "aliases": [ 00:27:53.762 "5f47c01d-dd2e-425b-89f4-fe5df3b07d49" 00:27:53.762 ], 00:27:53.762 "product_name": "NVMe disk", 00:27:53.762 "block_size": 512, 00:27:53.762 "num_blocks": 2097152, 00:27:53.762 "uuid": "5f47c01d-dd2e-425b-89f4-fe5df3b07d49", 00:27:53.762 "numa_id": 0, 00:27:53.762 "assigned_rate_limits": { 00:27:53.762 "rw_ios_per_sec": 0, 00:27:53.762 "rw_mbytes_per_sec": 0, 00:27:53.762 "r_mbytes_per_sec": 0, 00:27:53.762 "w_mbytes_per_sec": 0 00:27:53.762 }, 00:27:53.762 "claimed": false, 00:27:53.762 "zoned": false, 00:27:53.762 "supported_io_types": { 00:27:53.762 "read": true, 00:27:53.762 "write": true, 00:27:53.762 "unmap": false, 00:27:53.762 "flush": true, 00:27:53.762 "reset": true, 00:27:53.762 "nvme_admin": true, 00:27:53.762 "nvme_io": true, 00:27:53.762 "nvme_io_md": false, 00:27:53.762 "write_zeroes": true, 00:27:53.762 "zcopy": false, 00:27:53.762 "get_zone_info": false, 00:27:53.762 "zone_management": false, 00:27:53.762 "zone_append": false, 00:27:53.762 "compare": true, 00:27:53.762 "compare_and_write": true, 00:27:53.762 "abort": true, 00:27:53.762 "seek_hole": false, 00:27:53.762 "seek_data": false, 00:27:53.762 "copy": true, 00:27:53.762 "nvme_iov_md": false 00:27:53.762 }, 00:27:53.762 "memory_domains": [ 00:27:53.762 { 00:27:53.762 "dma_device_id": "system", 00:27:53.762 "dma_device_type": 1 00:27:53.762 } 00:27:53.762 ], 00:27:53.762 "driver_specific": { 00:27:53.762 "nvme": [ 00:27:53.762 { 00:27:53.762 "trid": { 00:27:53.762 "trtype": "TCP", 00:27:53.762 "adrfam": "IPv4", 00:27:53.762 "traddr": "10.0.0.2", 00:27:53.762 "trsvcid": "4420", 00:27:53.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:53.762 }, 00:27:53.762 "ctrlr_data": { 00:27:53.763 "cntlid": 2, 00:27:53.763 "vendor_id": "0x8086", 00:27:53.763 "model_number": "SPDK bdev Controller", 00:27:53.763 "serial_number": "00000000000000000000", 00:27:53.763 "firmware_revision": "25.01", 00:27:53.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.763 "oacs": { 00:27:53.763 "security": 0, 00:27:53.763 "format": 0, 00:27:53.763 "firmware": 0, 00:27:53.763 "ns_manage": 0 00:27:53.763 }, 00:27:53.763 "multi_ctrlr": true, 00:27:53.763 "ana_reporting": false 00:27:53.763 }, 00:27:53.763 "vs": { 00:27:53.763 "nvme_version": "1.3" 00:27:53.763 }, 00:27:53.763 "ns_data": { 00:27:53.763 "id": 1, 00:27:53.763 "can_share": true 00:27:53.763 } 00:27:53.763 } 00:27:53.763 ], 00:27:53.763 "mp_policy": "active_passive" 00:27:53.763 } 00:27:53.763 } 00:27:53.763 ] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
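The before/after bdev_get_bdevs dumps are intentionally near-identical: the namespace keeps its UUID/alias 5f47c01d-dd2e-425b-89f4-fe5df3b07d49 while ctrlr_data.cntlid moves from 1 to 2, i.e. the reattached controller is a new admin connection backing the same bdev. Outside the harness the same two fields can be pulled with a one-liner (assuming jq is installed and the default RPC socket is in use):

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0] | {uuid, cntlid: .driver_specific.nvme[0].ctrlr_data.cntlid}'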
00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Qo9Vq3G0VC 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Qo9Vq3G0VC 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Qo9Vq3G0VC 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.763 [2024-11-06 10:19:57.242925] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:53.763 [2024-11-06 10:19:57.243038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.763 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:54.024 [2024-11-06 10:19:57.266996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:54.024 nvme0n1 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:54.024 [ 00:27:54.024 { 00:27:54.024 "name": "nvme0n1", 00:27:54.024 "aliases": [ 00:27:54.024 "5f47c01d-dd2e-425b-89f4-fe5df3b07d49" 00:27:54.024 ], 00:27:54.024 "product_name": "NVMe disk", 00:27:54.024 "block_size": 512, 00:27:54.024 "num_blocks": 2097152, 00:27:54.024 "uuid": "5f47c01d-dd2e-425b-89f4-fe5df3b07d49", 00:27:54.024 "numa_id": 0, 00:27:54.024 "assigned_rate_limits": { 00:27:54.024 "rw_ios_per_sec": 0, 00:27:54.024 "rw_mbytes_per_sec": 0, 00:27:54.024 "r_mbytes_per_sec": 0, 00:27:54.024 "w_mbytes_per_sec": 0 00:27:54.024 }, 00:27:54.024 "claimed": false, 00:27:54.024 "zoned": false, 00:27:54.024 "supported_io_types": { 00:27:54.024 "read": true, 00:27:54.024 "write": true, 00:27:54.024 "unmap": false, 00:27:54.024 "flush": true, 00:27:54.024 "reset": true, 00:27:54.024 "nvme_admin": true, 00:27:54.024 "nvme_io": true, 00:27:54.024 "nvme_io_md": false, 00:27:54.024 "write_zeroes": true, 00:27:54.024 "zcopy": false, 00:27:54.024 "get_zone_info": false, 00:27:54.024 "zone_management": false, 00:27:54.024 "zone_append": false, 00:27:54.024 "compare": true, 00:27:54.024 "compare_and_write": true, 00:27:54.024 "abort": true, 00:27:54.024 "seek_hole": false, 00:27:54.024 "seek_data": false, 00:27:54.024 "copy": true, 00:27:54.024 "nvme_iov_md": false 00:27:54.024 }, 00:27:54.024 "memory_domains": [ 00:27:54.024 { 00:27:54.024 "dma_device_id": "system", 00:27:54.024 "dma_device_type": 1 00:27:54.024 } 00:27:54.024 ], 00:27:54.024 "driver_specific": { 00:27:54.024 "nvme": [ 00:27:54.024 { 00:27:54.024 "trid": { 00:27:54.024 "trtype": "TCP", 00:27:54.024 "adrfam": "IPv4", 00:27:54.024 "traddr": "10.0.0.2", 00:27:54.024 "trsvcid": "4421", 00:27:54.024 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:54.024 }, 00:27:54.024 "ctrlr_data": { 00:27:54.024 "cntlid": 3, 00:27:54.024 "vendor_id": "0x8086", 00:27:54.024 "model_number": "SPDK bdev Controller", 00:27:54.024 "serial_number": "00000000000000000000", 00:27:54.024 "firmware_revision": "25.01", 00:27:54.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.024 "oacs": { 00:27:54.024 "security": 0, 00:27:54.024 "format": 0, 00:27:54.024 "firmware": 0, 00:27:54.024 "ns_manage": 0 00:27:54.024 }, 00:27:54.024 "multi_ctrlr": true, 00:27:54.024 "ana_reporting": false 00:27:54.024 }, 00:27:54.024 "vs": { 00:27:54.024 "nvme_version": "1.3" 00:27:54.024 }, 00:27:54.024 "ns_data": { 00:27:54.024 "id": 1, 00:27:54.024 "can_share": true 00:27:54.024 } 00:27:54.024 } 00:27:54.024 ], 00:27:54.024 "mp_policy": "active_passive" 00:27:54.024 } 00:27:54.024 } 00:27:54.024 ] 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Qo9Vq3G0VC 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
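The second half of the test repeats the attach over a TLS-protected listener on port 4421. Condensed from the commands traced above (the PSK interchange value is the one echoed by the test itself, and the listener and host entry must reference the same keyring name):

  KEY_PATH=$(mktemp)                                      # /tmp/tmp.Qo9Vq3G0VC in this run
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both ends log that TLS support is still considered experimental, and the final dump shows the same namespace now reached via trsvcid 4421 with cntlid 3.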
00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.024 rmmod nvme_tcp 00:27:54.024 rmmod nvme_fabrics 00:27:54.024 rmmod nvme_keyring 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3990701 ']' 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3990701 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3990701 ']' 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3990701 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:54.024 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3990701 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3990701' 00:27:54.285 killing process with pid 3990701 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3990701 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3990701 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
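Teardown (nvmftestfini) then unwinds the setup: the temporary key file is removed, nvme-tcp/nvme-fabrics/nvme-keyring are unloaded (the rmmod lines above), nvmf_tgt pid 3990701 is killed, and the firewall rule added during setup is dropped. The cleanup relies on the SPDK_NVMF comment attached to every rule the harness inserts, roughly:

  # Remove every rule the test added by filtering the saved ruleset on its comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

After that the SPDK network namespace is removed (_remove_spdk_ns) and the remaining address on cvl_0_1 is flushed, returning the NICs to their pre-test state.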
00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.285 10:19:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.828 10:19:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.828 00:27:56.828 real 0m12.595s 00:27:56.828 user 0m4.470s 00:27:56.828 sys 0m6.641s 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.829 ************************************ 00:27:56.829 END TEST nvmf_async_init 00:27:56.829 ************************************ 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.829 ************************************ 00:27:56.829 START TEST dma 00:27:56.829 ************************************ 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:56.829 * Looking for test storage... 00:27:56.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:56.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.829 --rc genhtml_branch_coverage=1 00:27:56.829 --rc genhtml_function_coverage=1 00:27:56.829 --rc genhtml_legend=1 00:27:56.829 --rc geninfo_all_blocks=1 00:27:56.829 --rc geninfo_unexecuted_blocks=1 00:27:56.829 00:27:56.829 ' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:56.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.829 --rc genhtml_branch_coverage=1 00:27:56.829 --rc genhtml_function_coverage=1 00:27:56.829 --rc genhtml_legend=1 00:27:56.829 --rc geninfo_all_blocks=1 00:27:56.829 --rc geninfo_unexecuted_blocks=1 00:27:56.829 00:27:56.829 ' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:56.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.829 --rc genhtml_branch_coverage=1 00:27:56.829 --rc genhtml_function_coverage=1 00:27:56.829 --rc genhtml_legend=1 00:27:56.829 --rc geninfo_all_blocks=1 00:27:56.829 --rc geninfo_unexecuted_blocks=1 00:27:56.829 00:27:56.829 ' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:56.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.829 --rc genhtml_branch_coverage=1 00:27:56.829 --rc genhtml_function_coverage=1 00:27:56.829 --rc genhtml_legend=1 00:27:56.829 --rc geninfo_all_blocks=1 00:27:56.829 --rc geninfo_unexecuted_blocks=1 00:27:56.829 00:27:56.829 ' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.829 
10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:56.829 10:19:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:56.829 00:27:56.830 real 0m0.187s 00:27:56.830 user 0m0.105s 00:27:56.830 sys 0m0.089s 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:56.830 ************************************ 00:27:56.830 END TEST dma 00:27:56.830 ************************************ 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:56.830 10:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.830 ************************************ 00:27:56.830 START TEST nvmf_identify 00:27:56.830 
************************************ 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:56.830 * Looking for test storage... 00:27:56.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.830 --rc genhtml_branch_coverage=1 00:27:56.830 --rc genhtml_function_coverage=1 00:27:56.830 --rc genhtml_legend=1 00:27:56.830 --rc geninfo_all_blocks=1 00:27:56.830 --rc geninfo_unexecuted_blocks=1 00:27:56.830 00:27:56.830 ' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.830 --rc genhtml_branch_coverage=1 00:27:56.830 --rc genhtml_function_coverage=1 00:27:56.830 --rc genhtml_legend=1 00:27:56.830 --rc geninfo_all_blocks=1 00:27:56.830 --rc geninfo_unexecuted_blocks=1 00:27:56.830 00:27:56.830 ' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.830 --rc genhtml_branch_coverage=1 00:27:56.830 --rc genhtml_function_coverage=1 00:27:56.830 --rc genhtml_legend=1 00:27:56.830 --rc geninfo_all_blocks=1 00:27:56.830 --rc geninfo_unexecuted_blocks=1 00:27:56.830 00:27:56.830 ' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.830 --rc genhtml_branch_coverage=1 00:27:56.830 --rc genhtml_function_coverage=1 00:27:56.830 --rc genhtml_legend=1 00:27:56.830 --rc geninfo_all_blocks=1 00:27:56.830 --rc geninfo_unexecuted_blocks=1 00:27:56.830 00:27:56.830 ' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:56.830 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
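One genuine script error is visible above: common.sh line 33 runs a numeric test on an empty string ('[' '' -eq 1 ']'), so bash prints "integer expression expected" and the branch is skipped. The usual hardening is to give the possibly-unset variable a numeric default before comparing; a sketch with a placeholder variable name, since the trace does not show which setting is empty here:

# '[' '' -eq 1 ']' fails because -eq needs integers on both sides.
# SOME_TEST_FLAG is a placeholder for whatever common.sh line 33 actually reads.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "optional nvmf app argument would be appended here"
fi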
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.831 10:20:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:04.973 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:04.973 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
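The block above is gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID (0x8086:0x159b is the Intel E810 "ice" function found at 0000:31:00.0 and 0000:31:00.1); the lines that follow resolve which kernel net device sits under each matched PCI function by globbing sysfs. A minimal sketch of that sysfs lookup, with an illustrative helper name:

# Print the net interfaces that belong to one PCI function, e.g. 0000:31:00.0.
pci_to_netdevs() {
    local pci=$1 path
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] && echo "${path##*/}"    # e.g. cvl_0_0
    done
}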
00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:04.973 Found net devices under 0000:31:00.0: cvl_0_0 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:04.973 Found net devices under 0000:31:00.1: cvl_0_1 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.973 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:05.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:28:05.234 00:28:05.234 --- 10.0.0.2 ping statistics --- 00:28:05.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.234 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:28:05.234 00:28:05.234 --- 10.0.0.1 ping statistics --- 00:28:05.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.234 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:05.234 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3995889 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3995889 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3995889 ']' 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:05.494 10:20:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.494 [2024-11-06 10:20:08.791220] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
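By this point nvmf_tcp_init has built the usual two-sided loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, port 4420 is opened in iptables, both directions are ping-verified, and nvme-tcp is loaded before the target starts inside the namespace. Condensed from the commands in the trace (device, namespace, and address values as logged; flushes and iptables comments omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp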
00:28:05.494 [2024-11-06 10:20:08.791272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.494 [2024-11-06 10:20:08.875111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.494 [2024-11-06 10:20:08.912209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.494 [2024-11-06 10:20:08.912238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.494 [2024-11-06 10:20:08.912246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.494 [2024-11-06 10:20:08.912253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.495 [2024-11-06 10:20:08.912259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.495 [2024-11-06 10:20:08.913801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.495 [2024-11-06 10:20:08.913932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.495 [2024-11-06 10:20:08.913990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.495 [2024-11-06 10:20:08.913989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 [2024-11-06 10:20:09.607871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 Malloc0 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 [2024-11-06 10:20:09.719223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 [ 00:28:06.438 { 00:28:06.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.438 "subtype": "Discovery", 00:28:06.438 "listen_addresses": [ 00:28:06.438 { 00:28:06.438 "trtype": "TCP", 00:28:06.438 "adrfam": "IPv4", 00:28:06.438 "traddr": "10.0.0.2", 00:28:06.438 "trsvcid": "4420" 00:28:06.438 } 00:28:06.438 ], 00:28:06.438 "allow_any_host": true, 00:28:06.438 "hosts": [] 00:28:06.438 }, 00:28:06.438 { 00:28:06.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.438 "subtype": "NVMe", 00:28:06.438 "listen_addresses": [ 00:28:06.438 { 00:28:06.438 "trtype": "TCP", 00:28:06.438 "adrfam": "IPv4", 00:28:06.438 "traddr": "10.0.0.2", 00:28:06.438 "trsvcid": "4420" 00:28:06.438 } 00:28:06.438 ], 00:28:06.438 "allow_any_host": true, 00:28:06.438 "hosts": [], 00:28:06.438 "serial_number": "SPDK00000000000001", 00:28:06.438 "model_number": "SPDK bdev Controller", 00:28:06.438 "max_namespaces": 32, 00:28:06.438 "min_cntlid": 1, 00:28:06.438 "max_cntlid": 65519, 00:28:06.438 "namespaces": [ 00:28:06.438 { 00:28:06.438 "nsid": 1, 00:28:06.438 "bdev_name": "Malloc0", 00:28:06.438 "name": "Malloc0", 00:28:06.438 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:06.438 "eui64": "ABCDEF0123456789", 00:28:06.438 "uuid": "9c0584ab-ff94-42e9-9688-55f257768634" 00:28:06.438 } 00:28:06.438 ] 00:28:06.438 } 00:28:06.438 ] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.438 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:06.438 [2024-11-06 10:20:09.780970] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:06.438 [2024-11-06 10:20:09.781018] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995949 ] 00:28:06.438 [2024-11-06 10:20:09.835106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:28:06.438 [2024-11-06 10:20:09.835156] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:06.438 [2024-11-06 10:20:09.835162] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:06.438 [2024-11-06 10:20:09.835173] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:06.438 [2024-11-06 10:20:09.835183] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:06.438 [2024-11-06 10:20:09.839196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:28:06.438 [2024-11-06 10:20:09.839235] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ece550 0 00:28:06.438 [2024-11-06 10:20:09.846877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:06.438 [2024-11-06 10:20:09.846890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:06.438 [2024-11-06 10:20:09.846895] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:06.438 [2024-11-06 10:20:09.846899] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:06.438 [2024-11-06 10:20:09.846931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.438 [2024-11-06 10:20:09.846938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.438 [2024-11-06 10:20:09.846942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.438 [2024-11-06 10:20:09.846957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:06.438 [2024-11-06 10:20:09.846978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.438 [2024-11-06 10:20:09.854873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.438 [2024-11-06 10:20:09.854883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.438 [2024-11-06 10:20:09.854887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.438 [2024-11-06 10:20:09.854891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.438 [2024-11-06 10:20:09.854901] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:06.439 [2024-11-06 10:20:09.854909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:06.439 [2024-11-06 10:20:09.854914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:06.439 [2024-11-06 10:20:09.854927] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.854931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.854935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.854942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.854956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.855170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.855176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.855180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.855189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:06.439 [2024-11-06 10:20:09.855197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:06.439 [2024-11-06 10:20:09.855204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.855218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.855228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.855474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.855480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.855484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.855493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:06.439 [2024-11-06 10:20:09.855501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.855507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.855521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.855534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 
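Stepping back from the per-PDU trace: the subsystem this identify run is probing was assembled a few lines earlier by host/identify.sh with a short RPC sequence against the nvmf_tgt running inside the namespace. Condensed from the rpc_cmd calls above (rpc_cmd wraps the SPDK RPC client; the direct scripts/rpc.py invocation shown here is an assumed equivalent):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420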
00:28:06.439 [2024-11-06 10:20:09.855726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.855732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.855735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.855744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.855753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.855761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.855768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.855777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.856029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.856036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.856039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.856048] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:06.439 [2024-11-06 10:20:09.856053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.856061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.856169] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:06.439 [2024-11-06 10:20:09.856174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.856183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.856197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.856208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.856408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.856415] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.856418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.856427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:06.439 [2024-11-06 10:20:09.856436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.856454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.856464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.856677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.856684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.856687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.856696] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:06.439 [2024-11-06 10:20:09.856701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:06.439 [2024-11-06 10:20:09.856708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:06.439 [2024-11-06 10:20:09.856721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:06.439 [2024-11-06 10:20:09.856730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.856734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.439 [2024-11-06 10:20:09.856741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.439 [2024-11-06 10:20:09.856751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.439 [2024-11-06 10:20:09.856988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.439 [2024-11-06 10:20:09.856995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.439 [2024-11-06 10:20:09.856999] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857004] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ece550): datao=0, datal=4096, cccid=0 00:28:06.439 [2024-11-06 10:20:09.857008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1f30100) on tqpair(0x1ece550): expected_datao=0, payload_size=4096 00:28:06.439 [2024-11-06 10:20:09.857013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857021] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.439 [2024-11-06 10:20:09.857187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.439 [2024-11-06 10:20:09.857191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.439 [2024-11-06 10:20:09.857203] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:06.439 [2024-11-06 10:20:09.857208] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:06.439 [2024-11-06 10:20:09.857212] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:06.439 [2024-11-06 10:20:09.857220] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:06.439 [2024-11-06 10:20:09.857225] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:06.439 [2024-11-06 10:20:09.857230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:06.439 [2024-11-06 10:20:09.857242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:06.439 [2024-11-06 10:20:09.857249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.439 [2024-11-06 10:20:09.857253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:06.440 [2024-11-06 10:20:09.857275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.440 [2024-11-06 10:20:09.857484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.857490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.857494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.857506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ece550) 00:28:06.440 
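The DEBUG lines around here are the fabrics bring-up of the discovery controller: ICReq/ICResp, FABRIC CONNECT on the admin queue, property reads of VS and CAP, a check that CC.EN = 0 and CSTS.RDY = 0, CC.EN written to 1, a wait for CSTS.RDY = 1, then IDENTIFY controller with a 4096-byte payload. Outside the test harness, the same listener can be interrogated with nvme-cli instead of spdk_nvme_identify (assuming nvme-cli and the nvme-tcp module are available; the /dev/nvme0 name depends on enumeration order):

nvme discover -t tcp -a 10.0.0.2 -s 4420    # lists the subsystems advertised at this discovery endpoint
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                     # identify the newly connected controller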
[2024-11-06 10:20:09.857520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.440 [2024-11-06 10:20:09.857526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.440 [2024-11-06 10:20:09.857545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.440 [2024-11-06 10:20:09.857565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.440 [2024-11-06 10:20:09.857582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:06.440 [2024-11-06 10:20:09.857590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:06.440 [2024-11-06 10:20:09.857597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.440 [2024-11-06 10:20:09.857619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30100, cid 0, qid 0 00:28:06.440 [2024-11-06 10:20:09.857626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30280, cid 1, qid 0 00:28:06.440 [2024-11-06 10:20:09.857631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30400, cid 2, qid 0 00:28:06.440 [2024-11-06 10:20:09.857636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.440 [2024-11-06 10:20:09.857641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30700, cid 4, qid 0 00:28:06.440 [2024-11-06 10:20:09.857908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.857915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.857919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:06.440 [2024-11-06 10:20:09.857923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30700) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.857930] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:06.440 [2024-11-06 10:20:09.857935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:06.440 [2024-11-06 10:20:09.857946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.857950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.857956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.440 [2024-11-06 10:20:09.857967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30700, cid 4, qid 0 00:28:06.440 [2024-11-06 10:20:09.858179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.440 [2024-11-06 10:20:09.858186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.440 [2024-11-06 10:20:09.858189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ece550): datao=0, datal=4096, cccid=4 00:28:06.440 [2024-11-06 10:20:09.858197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30700) on tqpair(0x1ece550): expected_datao=0, payload_size=4096 00:28:06.440 [2024-11-06 10:20:09.858202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.858416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.858419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30700) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.858435] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:06.440 [2024-11-06 10:20:09.858457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.858468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.440 [2024-11-06 10:20:09.858475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.858488] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.440 [2024-11-06 10:20:09.858502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30700, cid 4, qid 0 00:28:06.440 [2024-11-06 10:20:09.858509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30880, cid 5, qid 0 00:28:06.440 [2024-11-06 10:20:09.858740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.440 [2024-11-06 10:20:09.858747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.440 [2024-11-06 10:20:09.858751] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ece550): datao=0, datal=1024, cccid=4 00:28:06.440 [2024-11-06 10:20:09.858759] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30700) on tqpair(0x1ece550): expected_datao=0, payload_size=1024 00:28:06.440 [2024-11-06 10:20:09.858763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858770] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858774] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.858785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.858789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.858793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30880) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.902873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.902883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.902886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.902890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30700) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.902902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.902906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.902913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.440 [2024-11-06 10:20:09.902929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30700, cid 4, qid 0 00:28:06.440 [2024-11-06 10:20:09.903113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.440 [2024-11-06 10:20:09.903119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.440 [2024-11-06 10:20:09.903123] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903127] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ece550): datao=0, datal=3072, cccid=4 00:28:06.440 [2024-11-06 10:20:09.903131] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30700) on tqpair(0x1ece550): expected_datao=0, payload_size=3072 00:28:06.440 [2024-11-06 10:20:09.903136] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903142] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.440 [2024-11-06 10:20:09.903338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.440 [2024-11-06 10:20:09.903342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30700) on tqpair=0x1ece550 00:28:06.440 [2024-11-06 10:20:09.903354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.440 [2024-11-06 10:20:09.903358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ece550) 00:28:06.440 [2024-11-06 10:20:09.903364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.440 [2024-11-06 10:20:09.903380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30700, cid 4, qid 0 00:28:06.440 [2024-11-06 10:20:09.903635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.440 [2024-11-06 10:20:09.903641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.441 [2024-11-06 10:20:09.903645] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.441 [2024-11-06 10:20:09.903649] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ece550): datao=0, datal=8, cccid=4 00:28:06.441 [2024-11-06 10:20:09.903653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30700) on tqpair(0x1ece550): expected_datao=0, payload_size=8 00:28:06.441 [2024-11-06 10:20:09.903657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.441 [2024-11-06 10:20:09.903664] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.441 [2024-11-06 10:20:09.903668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.705 [2024-11-06 10:20:09.944043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.705 [2024-11-06 10:20:09.944054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.705 [2024-11-06 10:20:09.944058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.705 [2024-11-06 10:20:09.944062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30700) on tqpair=0x1ece550 00:28:06.705 ===================================================== 00:28:06.705 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:06.705 ===================================================== 00:28:06.705 Controller Capabilities/Features 00:28:06.705 ================================ 00:28:06.705 Vendor ID: 0000 00:28:06.705 Subsystem Vendor ID: 0000 00:28:06.705 Serial Number: .................... 00:28:06.705 Model Number: ........................................ 
00:28:06.705 Firmware Version: 25.01 00:28:06.705 Recommended Arb Burst: 0 00:28:06.705 IEEE OUI Identifier: 00 00 00 00:28:06.705 Multi-path I/O 00:28:06.705 May have multiple subsystem ports: No 00:28:06.705 May have multiple controllers: No 00:28:06.705 Associated with SR-IOV VF: No 00:28:06.705 Max Data Transfer Size: 131072 00:28:06.705 Max Number of Namespaces: 0 00:28:06.705 Max Number of I/O Queues: 1024 00:28:06.705 NVMe Specification Version (VS): 1.3 00:28:06.705 NVMe Specification Version (Identify): 1.3 00:28:06.705 Maximum Queue Entries: 128 00:28:06.705 Contiguous Queues Required: Yes 00:28:06.706 Arbitration Mechanisms Supported 00:28:06.706 Weighted Round Robin: Not Supported 00:28:06.706 Vendor Specific: Not Supported 00:28:06.706 Reset Timeout: 15000 ms 00:28:06.706 Doorbell Stride: 4 bytes 00:28:06.706 NVM Subsystem Reset: Not Supported 00:28:06.706 Command Sets Supported 00:28:06.706 NVM Command Set: Supported 00:28:06.706 Boot Partition: Not Supported 00:28:06.706 Memory Page Size Minimum: 4096 bytes 00:28:06.706 Memory Page Size Maximum: 4096 bytes 00:28:06.706 Persistent Memory Region: Not Supported 00:28:06.706 Optional Asynchronous Events Supported 00:28:06.706 Namespace Attribute Notices: Not Supported 00:28:06.706 Firmware Activation Notices: Not Supported 00:28:06.706 ANA Change Notices: Not Supported 00:28:06.706 PLE Aggregate Log Change Notices: Not Supported 00:28:06.706 LBA Status Info Alert Notices: Not Supported 00:28:06.706 EGE Aggregate Log Change Notices: Not Supported 00:28:06.706 Normal NVM Subsystem Shutdown event: Not Supported 00:28:06.706 Zone Descriptor Change Notices: Not Supported 00:28:06.706 Discovery Log Change Notices: Supported 00:28:06.706 Controller Attributes 00:28:06.706 128-bit Host Identifier: Not Supported 00:28:06.706 Non-Operational Permissive Mode: Not Supported 00:28:06.706 NVM Sets: Not Supported 00:28:06.706 Read Recovery Levels: Not Supported 00:28:06.706 Endurance Groups: Not Supported 00:28:06.706 Predictable Latency Mode: Not Supported 00:28:06.706 Traffic Based Keep ALive: Not Supported 00:28:06.706 Namespace Granularity: Not Supported 00:28:06.706 SQ Associations: Not Supported 00:28:06.706 UUID List: Not Supported 00:28:06.706 Multi-Domain Subsystem: Not Supported 00:28:06.706 Fixed Capacity Management: Not Supported 00:28:06.706 Variable Capacity Management: Not Supported 00:28:06.706 Delete Endurance Group: Not Supported 00:28:06.706 Delete NVM Set: Not Supported 00:28:06.706 Extended LBA Formats Supported: Not Supported 00:28:06.706 Flexible Data Placement Supported: Not Supported 00:28:06.706 00:28:06.706 Controller Memory Buffer Support 00:28:06.706 ================================ 00:28:06.706 Supported: No 00:28:06.706 00:28:06.706 Persistent Memory Region Support 00:28:06.706 ================================ 00:28:06.706 Supported: No 00:28:06.706 00:28:06.706 Admin Command Set Attributes 00:28:06.706 ============================ 00:28:06.706 Security Send/Receive: Not Supported 00:28:06.706 Format NVM: Not Supported 00:28:06.706 Firmware Activate/Download: Not Supported 00:28:06.706 Namespace Management: Not Supported 00:28:06.706 Device Self-Test: Not Supported 00:28:06.706 Directives: Not Supported 00:28:06.706 NVMe-MI: Not Supported 00:28:06.706 Virtualization Management: Not Supported 00:28:06.706 Doorbell Buffer Config: Not Supported 00:28:06.706 Get LBA Status Capability: Not Supported 00:28:06.706 Command & Feature Lockdown Capability: Not Supported 00:28:06.706 Abort Command Limit: 1 00:28:06.706 Async 
Event Request Limit: 4 00:28:06.706 Number of Firmware Slots: N/A 00:28:06.706 Firmware Slot 1 Read-Only: N/A 00:28:06.706 Firmware Activation Without Reset: N/A 00:28:06.706 Multiple Update Detection Support: N/A 00:28:06.706 Firmware Update Granularity: No Information Provided 00:28:06.706 Per-Namespace SMART Log: No 00:28:06.706 Asymmetric Namespace Access Log Page: Not Supported 00:28:06.706 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:06.706 Command Effects Log Page: Not Supported 00:28:06.706 Get Log Page Extended Data: Supported 00:28:06.706 Telemetry Log Pages: Not Supported 00:28:06.706 Persistent Event Log Pages: Not Supported 00:28:06.706 Supported Log Pages Log Page: May Support 00:28:06.706 Commands Supported & Effects Log Page: Not Supported 00:28:06.706 Feature Identifiers & Effects Log Page:May Support 00:28:06.706 NVMe-MI Commands & Effects Log Page: May Support 00:28:06.706 Data Area 4 for Telemetry Log: Not Supported 00:28:06.706 Error Log Page Entries Supported: 128 00:28:06.706 Keep Alive: Not Supported 00:28:06.706 00:28:06.706 NVM Command Set Attributes 00:28:06.706 ========================== 00:28:06.706 Submission Queue Entry Size 00:28:06.706 Max: 1 00:28:06.706 Min: 1 00:28:06.706 Completion Queue Entry Size 00:28:06.706 Max: 1 00:28:06.706 Min: 1 00:28:06.706 Number of Namespaces: 0 00:28:06.706 Compare Command: Not Supported 00:28:06.706 Write Uncorrectable Command: Not Supported 00:28:06.706 Dataset Management Command: Not Supported 00:28:06.706 Write Zeroes Command: Not Supported 00:28:06.706 Set Features Save Field: Not Supported 00:28:06.706 Reservations: Not Supported 00:28:06.706 Timestamp: Not Supported 00:28:06.706 Copy: Not Supported 00:28:06.706 Volatile Write Cache: Not Present 00:28:06.706 Atomic Write Unit (Normal): 1 00:28:06.706 Atomic Write Unit (PFail): 1 00:28:06.706 Atomic Compare & Write Unit: 1 00:28:06.706 Fused Compare & Write: Supported 00:28:06.706 Scatter-Gather List 00:28:06.706 SGL Command Set: Supported 00:28:06.706 SGL Keyed: Supported 00:28:06.706 SGL Bit Bucket Descriptor: Not Supported 00:28:06.706 SGL Metadata Pointer: Not Supported 00:28:06.706 Oversized SGL: Not Supported 00:28:06.706 SGL Metadata Address: Not Supported 00:28:06.706 SGL Offset: Supported 00:28:06.706 Transport SGL Data Block: Not Supported 00:28:06.706 Replay Protected Memory Block: Not Supported 00:28:06.706 00:28:06.706 Firmware Slot Information 00:28:06.706 ========================= 00:28:06.706 Active slot: 0 00:28:06.706 00:28:06.706 00:28:06.706 Error Log 00:28:06.706 ========= 00:28:06.706 00:28:06.706 Active Namespaces 00:28:06.706 ================= 00:28:06.706 Discovery Log Page 00:28:06.706 ================== 00:28:06.706 Generation Counter: 2 00:28:06.706 Number of Records: 2 00:28:06.706 Record Format: 0 00:28:06.706 00:28:06.706 Discovery Log Entry 0 00:28:06.706 ---------------------- 00:28:06.706 Transport Type: 3 (TCP) 00:28:06.706 Address Family: 1 (IPv4) 00:28:06.706 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:06.706 Entry Flags: 00:28:06.706 Duplicate Returned Information: 1 00:28:06.706 Explicit Persistent Connection Support for Discovery: 1 00:28:06.706 Transport Requirements: 00:28:06.706 Secure Channel: Not Required 00:28:06.706 Port ID: 0 (0x0000) 00:28:06.706 Controller ID: 65535 (0xffff) 00:28:06.706 Admin Max SQ Size: 128 00:28:06.706 Transport Service Identifier: 4420 00:28:06.706 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:06.706 Transport Address: 10.0.0.2 00:28:06.706 
Discovery Log Entry 1 00:28:06.706 ---------------------- 00:28:06.706 Transport Type: 3 (TCP) 00:28:06.706 Address Family: 1 (IPv4) 00:28:06.706 Subsystem Type: 2 (NVM Subsystem) 00:28:06.706 Entry Flags: 00:28:06.706 Duplicate Returned Information: 0 00:28:06.706 Explicit Persistent Connection Support for Discovery: 0 00:28:06.706 Transport Requirements: 00:28:06.706 Secure Channel: Not Required 00:28:06.706 Port ID: 0 (0x0000) 00:28:06.706 Controller ID: 65535 (0xffff) 00:28:06.706 Admin Max SQ Size: 128 00:28:06.706 Transport Service Identifier: 4420 00:28:06.706 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:06.706 Transport Address: 10.0.0.2 [2024-11-06 10:20:09.944154] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:28:06.706 [2024-11-06 10:20:09.944165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30100) on tqpair=0x1ece550 00:28:06.706 [2024-11-06 10:20:09.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.706 [2024-11-06 10:20:09.944178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30280) on tqpair=0x1ece550 00:28:06.706 [2024-11-06 10:20:09.944183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.706 [2024-11-06 10:20:09.944188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30400) on tqpair=0x1ece550 00:28:06.706 [2024-11-06 10:20:09.944192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.706 [2024-11-06 10:20:09.944198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.706 [2024-11-06 10:20:09.944202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.706 [2024-11-06 10:20:09.944214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.706 [2024-11-06 10:20:09.944218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.706 [2024-11-06 10:20:09.944222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.706 [2024-11-06 10:20:09.944229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.706 [2024-11-06 10:20:09.944242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.706 [2024-11-06 10:20:09.944336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.706 [2024-11-06 10:20:09.944343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.944346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.944357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 
10:20:09.944371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.944386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.944612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.944619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.944623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.944632] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:06.707 [2024-11-06 10:20:09.944636] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:06.707 [2024-11-06 10:20:09.944645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.944660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.944669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.944914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.944921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.944924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.944938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.944946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.944952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.944963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.945165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.945172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.945175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.945188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945196] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.945202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.945212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.945391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.945397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.945400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.945414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.945430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.945440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.945669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.945675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.945678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.945691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.945705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.945715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.945922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.945928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.945932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.945945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.945952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.945959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.945969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.946222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.946229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.946232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.946245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.946259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.946269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.946432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.946438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.946442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.946455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.946469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.946481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 [2024-11-06 10:20:09.946675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.946682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.946685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.946698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.946706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.707 [2024-11-06 10:20:09.946712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.707 [2024-11-06 10:20:09.946722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.707 
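The run of FABRIC PROPERTY GET entries above is the host polling the discovery controller's CSTS register while tearing it down: nvme_ctrlr_shutdown_set_cc_done has requested a normal shutdown through CC.SHN, and the destruct path keeps reading CSTS until CSTS.SHST reports completion (the log notes the shutdown finishing a few milliseconds later). Below is a minimal stand-alone sketch of that handshake, assuming an instantly-completing target; prop_get()/prop_set() stand in for the fabrics Property Get/Set exchange and are not SPDK functions.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the shutdown handshake being polled above: write CC.SHN = 01b
     * (normal shutdown), then read CSTS until CSTS.SHST reads 10b (shutdown
     * processing complete).  prop_get()/prop_set() are illustrative stubs for
     * the fabrics Property Get/Set commands, not SPDK API. */

    #define REG_CC         0x14u
    #define REG_CSTS       0x1Cu
    #define CC_SHN_NORMAL  (0x1u << 14)    /* CC.SHN, bits 15:14 */
    #define CSTS_SHST_MASK 0xCu            /* CSTS.SHST, bits 3:2 */
    #define CSTS_SHST_DONE 0x8u            /* 10b = shutdown complete */

    static uint32_t cc_reg, csts_reg;      /* fake remote controller state */

    static uint32_t prop_get(uint32_t off)
    {
        return off == REG_CC ? cc_reg : csts_reg;
    }

    static void prop_set(uint32_t off, uint32_t val)
    {
        if (off == REG_CC) {
            cc_reg = val;
            if (val & CC_SHN_NORMAL)       /* pretend shutdown finishes at once */
                csts_reg |= CSTS_SHST_DONE;
        }
    }

    int main(void)
    {
        prop_set(REG_CC, prop_get(REG_CC) | CC_SHN_NORMAL);  /* request shutdown */

        int polls = 0;
        while ((prop_get(REG_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_DONE)
            polls++;                       /* real code sleeps and honors RTD3E */

        printf("CSTS.SHST reported shutdown complete after %d extra poll(s)\n", polls);
        return 0;
    }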
[2024-11-06 10:20:09.950870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.707 [2024-11-06 10:20:09.950879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.707 [2024-11-06 10:20:09.950882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.707 [2024-11-06 10:20:09.950886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.707 [2024-11-06 10:20:09.950896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:09.950900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:09.950903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ece550) 00:28:06.708 [2024-11-06 10:20:09.950910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.708 [2024-11-06 10:20:09.950922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30580, cid 3, qid 0 00:28:06.708 [2024-11-06 10:20:09.951105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.708 [2024-11-06 10:20:09.951111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.708 [2024-11-06 10:20:09.951114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:09.951118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f30580) on tqpair=0x1ece550 00:28:06.708 [2024-11-06 10:20:09.951126] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:28:06.708 00:28:06.708 10:20:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:06.708 [2024-11-06 10:20:09.994223] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
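The -r string handed to spdk_nvme_identify above is a whitespace-separated list of key:value pairs (trtype, adrfam, traddr, trsvcid, subnqn); SPDK provides spdk_nvme_transport_id_parse() to turn it into a transport ID. The toy parser below only illustrates the format with the values from this run and is not the SPDK implementation.

    #include <stdio.h>
    #include <string.h>

    /* Toy parser for the key:value transport-ID string passed to
     * spdk_nvme_identify above; illustrative only. */

    struct trid {
        char trtype[16], adrfam[16], traddr[64], trsvcid[16], subnqn[224];
    };

    static void set_field(struct trid *t, const char *key, const char *val)
    {
        if      (!strcmp(key, "trtype"))  snprintf(t->trtype,  sizeof t->trtype,  "%s", val);
        else if (!strcmp(key, "adrfam"))  snprintf(t->adrfam,  sizeof t->adrfam,  "%s", val);
        else if (!strcmp(key, "traddr"))  snprintf(t->traddr,  sizeof t->traddr,  "%s", val);
        else if (!strcmp(key, "trsvcid")) snprintf(t->trsvcid, sizeof t->trsvcid, "%s", val);
        else if (!strcmp(key, "subnqn"))  snprintf(t->subnqn,  sizeof t->subnqn,  "%s", val);
    }

    int main(void)
    {
        char s[] = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                   "subnqn:nqn.2016-06.io.spdk:cnode1";
        struct trid t = {0};

        for (char *tok = strtok(s, " "); tok; tok = strtok(NULL, " ")) {
            char *colon = strchr(tok, ':');   /* split on the first ':' only, */
            if (!colon)                       /* since the subnqn value itself */
                continue;                     /* contains ':' characters       */
            *colon = '\0';
            set_field(&t, tok, colon + 1);
        }

        printf("connect %s/%s to %s:%s (%s)\n",
               t.trtype, t.adrfam, t.traddr, t.trsvcid, t.subnqn);
        return 0;
    }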
00:28:06.708 [2024-11-06 10:20:09.994268] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996052 ] 00:28:06.708 [2024-11-06 10:20:10.049186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:06.708 [2024-11-06 10:20:10.049241] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:06.708 [2024-11-06 10:20:10.049246] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:06.708 [2024-11-06 10:20:10.049263] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:06.708 [2024-11-06 10:20:10.049273] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:06.708 [2024-11-06 10:20:10.053193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:06.708 [2024-11-06 10:20:10.053226] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe5e550 0 00:28:06.708 [2024-11-06 10:20:10.060876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:06.708 [2024-11-06 10:20:10.060891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:06.708 [2024-11-06 10:20:10.060897] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:06.708 [2024-11-06 10:20:10.060901] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:06.708 [2024-11-06 10:20:10.060932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.060939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.060944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.708 [2024-11-06 10:20:10.060956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:06.708 [2024-11-06 10:20:10.060975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.708 [2024-11-06 10:20:10.068875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.708 [2024-11-06 10:20:10.068886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.708 [2024-11-06 10:20:10.068889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.068894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.708 [2024-11-06 10:20:10.068905] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:06.708 [2024-11-06 10:20:10.068912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:06.708 [2024-11-06 10:20:10.068917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:06.708 [2024-11-06 10:20:10.068930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.068934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.068938] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.708 [2024-11-06 10:20:10.068946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.708 [2024-11-06 10:20:10.068960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.708 [2024-11-06 10:20:10.069130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.708 [2024-11-06 10:20:10.069137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.708 [2024-11-06 10:20:10.069140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.708 [2024-11-06 10:20:10.069149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:06.708 [2024-11-06 10:20:10.069157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:06.708 [2024-11-06 10:20:10.069164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.708 [2024-11-06 10:20:10.069178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.708 [2024-11-06 10:20:10.069192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.708 [2024-11-06 10:20:10.069357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.708 [2024-11-06 10:20:10.069363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.708 [2024-11-06 10:20:10.069367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.708 [2024-11-06 10:20:10.069376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:06.708 [2024-11-06 10:20:10.069385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:06.708 [2024-11-06 10:20:10.069391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.708 [2024-11-06 10:20:10.069406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.708 [2024-11-06 10:20:10.069416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.708 [2024-11-06 10:20:10.069621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.708 [2024-11-06 10:20:10.069627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.708 [2024-11-06 10:20:10.069631] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.708 [2024-11-06 10:20:10.069640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:06.708 [2024-11-06 10:20:10.069649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.708 [2024-11-06 10:20:10.069653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.069657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.709 [2024-11-06 10:20:10.069663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.709 [2024-11-06 10:20:10.069674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.709 [2024-11-06 10:20:10.069892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.709 [2024-11-06 10:20:10.069899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.709 [2024-11-06 10:20:10.069902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.069906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.709 [2024-11-06 10:20:10.069911] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:06.709 [2024-11-06 10:20:10.069916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:06.709 [2024-11-06 10:20:10.069924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:06.709 [2024-11-06 10:20:10.070033] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:06.709 [2024-11-06 10:20:10.070038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:06.709 [2024-11-06 10:20:10.070047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.709 [2024-11-06 10:20:10.070063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.709 [2024-11-06 10:20:10.070074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.709 [2024-11-06 10:20:10.070245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.709 [2024-11-06 10:20:10.070251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.709 [2024-11-06 10:20:10.070255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.709 [2024-11-06 
10:20:10.070263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:06.709 [2024-11-06 10:20:10.070273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.709 [2024-11-06 10:20:10.070288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.709 [2024-11-06 10:20:10.070298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.709 [2024-11-06 10:20:10.070480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.709 [2024-11-06 10:20:10.070487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.709 [2024-11-06 10:20:10.070490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.709 [2024-11-06 10:20:10.070499] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:06.709 [2024-11-06 10:20:10.070503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:06.709 [2024-11-06 10:20:10.070511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:28:06.709 [2024-11-06 10:20:10.070521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:06.709 [2024-11-06 10:20:10.070530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.709 [2024-11-06 10:20:10.070540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.709 [2024-11-06 10:20:10.070551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.709 [2024-11-06 10:20:10.070785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.709 [2024-11-06 10:20:10.070792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.709 [2024-11-06 10:20:10.070795] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=4096, cccid=0 00:28:06.709 [2024-11-06 10:20:10.070804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0100) on tqpair(0xe5e550): expected_datao=0, payload_size=4096 00:28:06.709 [2024-11-06 10:20:10.070809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.070829] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
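The _nvme_ctrlr_set_state transitions above ("check en", "disable and wait for CSTS.RDY = 0", "enable controller by writing CC.EN = 1", "wait for CSTS.RDY = 1") are the standard NVMe enable handshake, carried over fabrics Property Get/Set instead of MMIO. The stand-alone sketch below models only that handshake under the assumption of a target that becomes ready immediately; get_prop()/set_prop() are illustrative stubs, not SPDK APIs.

    #include <stdint.h>
    #include <stdio.h>

    /* Condensed model of the enable sequence in the log: read CC/CSTS, disable
     * the controller if it is already enabled, wait for CSTS.RDY = 0, set
     * CC.EN = 1, then wait for CSTS.RDY = 1 before sending IDENTIFY. */

    #define REG_CC   0x14u
    #define REG_CSTS 0x1Cu
    #define CC_EN    0x1u
    #define CSTS_RDY 0x1u

    static uint32_t cc, csts;                     /* fake remote registers */

    static uint32_t get_prop(uint32_t off)
    {
        return off == REG_CC ? cc : csts;
    }

    static void set_prop(uint32_t off, uint32_t val)
    {
        if (off == REG_CC) {
            cc = val;
            csts = (val & CC_EN) ? CSTS_RDY : 0;  /* target follows EN instantly */
        }
    }

    int main(void)
    {
        if (get_prop(REG_CC) & CC_EN) {           /* "check en" state            */
            set_prop(REG_CC, get_prop(REG_CC) & ~CC_EN);
            while (get_prop(REG_CSTS) & CSTS_RDY) /* wait for CSTS.RDY = 0       */
                ;
        }

        set_prop(REG_CC, get_prop(REG_CC) | CC_EN);   /* CC.EN = 1               */
        while (!(get_prop(REG_CSTS) & CSTS_RDY))      /* wait for CSTS.RDY = 1   */
            ;

        puts("controller ready - next step would be IDENTIFY (CNS 0x01)");
        return 0;
    }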
00:28:06.709 [2024-11-06 10:20:10.111048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.709 [2024-11-06 10:20:10.111060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.709 [2024-11-06 10:20:10.111067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.709 [2024-11-06 10:20:10.111080] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:06.709 [2024-11-06 10:20:10.111085] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:06.709 [2024-11-06 10:20:10.111090] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:06.709 [2024-11-06 10:20:10.111098] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:06.709 [2024-11-06 10:20:10.111103] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:06.709 [2024-11-06 10:20:10.111108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:06.709 [2024-11-06 10:20:10.111118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:06.709 [2024-11-06 10:20:10.111126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.709 [2024-11-06 10:20:10.111141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:06.709 [2024-11-06 10:20:10.111154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.709 [2024-11-06 10:20:10.111377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.709 [2024-11-06 10:20:10.111383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.709 [2024-11-06 10:20:10.111387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.709 [2024-11-06 10:20:10.111398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.709 [2024-11-06 10:20:10.111406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.710 [2024-11-06 10:20:10.111418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.710 [2024-11-06 10:20:10.111437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.710 [2024-11-06 10:20:10.111456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.710 [2024-11-06 10:20:10.111476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.111484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.111491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.710 [2024-11-06 10:20:10.111513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0100, cid 0, qid 0 00:28:06.710 [2024-11-06 10:20:10.111518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0280, cid 1, qid 0 00:28:06.710 [2024-11-06 10:20:10.111523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0400, cid 2, qid 0 00:28:06.710 [2024-11-06 10:20:10.111528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0580, cid 3, qid 0 00:28:06.710 [2024-11-06 10:20:10.111533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.710 [2024-11-06 10:20:10.111735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.710 [2024-11-06 10:20:10.111741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.710 [2024-11-06 10:20:10.111745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.710 [2024-11-06 10:20:10.111756] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:06.710 [2024-11-06 10:20:10.111762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
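nvme_ctrlr_identify_done above reports a transport ceiling of 4294967295 bytes and an MDTS-derived ceiling of 131072 bytes; the effective maximum data transfer size is the smaller of the two. Per the NVMe spec, the MDTS field in Identify Controller is a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN), so 131072 bytes is consistent with a 4 KiB minimum page and MDTS = 5. Those two inputs are inferred, not printed in the log; the arithmetic check below just reproduces the reported value.

    #include <stdint.h>
    #include <stdio.h>

    /* Reproduce "MDTS max_xfer_size 131072" from the identify path above.
     * mpsmin and mdts are assumed values consistent with the log, not read
     * from it. */

    int main(void)
    {
        uint32_t mpsmin = 0;                        /* CAP.MPSMIN              */
        uint32_t mdts   = 5;                        /* Identify Controller MDTS */
        uint64_t page   = 1ull << (12 + mpsmin);    /* minimum page = 4096 B   */
        uint64_t max_xfer = (1ull << mdts) * page;  /* 32 * 4096 = 131072 B    */

        printf("min page %llu B, MDTS %u -> max transfer %llu B\n",
               (unsigned long long)page, mdts, (unsigned long long)max_xfer);
        return 0;
    }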
00:28:06.710 [2024-11-06 10:20:10.111770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.111777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.111783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.111791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.111797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:06.710 [2024-11-06 10:20:10.111808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.710 [2024-11-06 10:20:10.112009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.710 [2024-11-06 10:20:10.112016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.710 [2024-11-06 10:20:10.112020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.710 [2024-11-06 10:20:10.112088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.112098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.112106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.112118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.710 [2024-11-06 10:20:10.112129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.710 [2024-11-06 10:20:10.112289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.710 [2024-11-06 10:20:10.112296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.710 [2024-11-06 10:20:10.112299] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112303] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=4096, cccid=4 00:28:06.710 [2024-11-06 10:20:10.112307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0700) on tqpair(0xe5e550): expected_datao=0, payload_size=4096 00:28:06.710 [2024-11-06 10:20:10.112312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112319] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112322] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.710 [2024-11-06 10:20:10.112533] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.710 [2024-11-06 10:20:10.112537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.710 [2024-11-06 10:20:10.112549] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:06.710 [2024-11-06 10:20:10.112563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.112573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:06.710 [2024-11-06 10:20:10.112580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.710 [2024-11-06 10:20:10.112590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.710 [2024-11-06 10:20:10.112600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.710 [2024-11-06 10:20:10.112803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.710 [2024-11-06 10:20:10.112810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.710 [2024-11-06 10:20:10.112813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.710 [2024-11-06 10:20:10.112817] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=4096, cccid=4 00:28:06.710 [2024-11-06 10:20:10.112821] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0700) on tqpair(0xe5e550): expected_datao=0, payload_size=4096 00:28:06.711 [2024-11-06 10:20:10.112825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.112832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.112836] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.116872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.711 [2024-11-06 10:20:10.116881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.711 [2024-11-06 10:20:10.116884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.116888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.711 [2024-11-06 10:20:10.116902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.116911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.116921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.116925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.711 [2024-11-06 10:20:10.116931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.711 [2024-11-06 10:20:10.116943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.711 [2024-11-06 10:20:10.117143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.711 [2024-11-06 10:20:10.117150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.711 [2024-11-06 10:20:10.117154] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.117157] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=4096, cccid=4 00:28:06.711 [2024-11-06 10:20:10.117162] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0700) on tqpair(0xe5e550): expected_datao=0, payload_size=4096 00:28:06.711 [2024-11-06 10:20:10.117166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.117179] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.117183] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.711 [2024-11-06 10:20:10.158056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.711 [2024-11-06 10:20:10.158060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.711 [2024-11-06 10:20:10.158071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158112] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:06.711 [2024-11-06 10:20:10.158117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:06.711 [2024-11-06 10:20:10.158122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:06.711 [2024-11-06 10:20:10.158137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.711 
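The IDENTIFY commands in this stretch differ only in CDW10's CNS field and the target NSID: 0x01 (controller), 0x02 (active namespace ID list), then 0x00 and 0x03 against nsid 1, which is how "Namespace 1 was added" ends up populated. The small table below just names those CNS codes as a reading aid for the cdw10 values printed above; it is illustrative, not SPDK code.

    #include <stdint.h>
    #include <stdio.h>

    /* CNS values used by the IDENTIFY commands visible in the log
     * (cdw10:00000001, 00000002, 00000000, 00000003). */

    struct cns_step {
        uint8_t     cns;
        uint32_t    nsid;       /* 0 = not namespace-specific */
        const char *what;
    };

    int main(void)
    {
        const struct cns_step seq[] = {
            { 0x01, 0, "Identify Controller" },
            { 0x02, 0, "Active Namespace ID list" },
            { 0x00, 1, "Identify Namespace (per active NSID)" },
            { 0x03, 1, "Namespace Identification Descriptor list" },
        };

        for (size_t i = 0; i < sizeof seq / sizeof seq[0]; i++) {
            printf("IDENTIFY cdw10=0x%08x nsid=%u -> %s\n",
                   (uint32_t)seq[i].cns, seq[i].nsid, seq[i].what);
        }
        return 0;
    }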
[2024-11-06 10:20:10.158148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.711 [2024-11-06 10:20:10.158155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe5e550) 00:28:06.711 [2024-11-06 10:20:10.158170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.711 [2024-11-06 10:20:10.158185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.711 [2024-11-06 10:20:10.158190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0880, cid 5, qid 0 00:28:06.711 [2024-11-06 10:20:10.158378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.711 [2024-11-06 10:20:10.158384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.711 [2024-11-06 10:20:10.158387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.711 [2024-11-06 10:20:10.158398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.711 [2024-11-06 10:20:10.158404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.711 [2024-11-06 10:20:10.158407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0880) on tqpair=0xe5e550 00:28:06.711 [2024-11-06 10:20:10.158420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.711 [2024-11-06 10:20:10.158423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe5e550) 00:28:06.711 [2024-11-06 10:20:10.158430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.158440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0880, cid 5, qid 0 00:28:06.712 [2024-11-06 10:20:10.158637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.712 [2024-11-06 10:20:10.158643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.712 [2024-11-06 10:20:10.158647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.158650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0880) on tqpair=0xe5e550 00:28:06.712 [2024-11-06 10:20:10.158659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.158663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.158670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.158679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0880, cid 5, qid 0 00:28:06.712 [2024-11-06 10:20:10.158910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:06.712 [2024-11-06 10:20:10.158917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.712 [2024-11-06 10:20:10.158920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.158924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0880) on tqpair=0xe5e550 00:28:06.712 [2024-11-06 10:20:10.158933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.158937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.158943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.158953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0880, cid 5, qid 0 00:28:06.712 [2024-11-06 10:20:10.159183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.712 [2024-11-06 10:20:10.159189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.712 [2024-11-06 10:20:10.159192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0880) on tqpair=0xe5e550 00:28:06.712 [2024-11-06 10:20:10.159211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.159225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.159233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.159243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.159250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.159260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.159267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe5e550) 00:28:06.712 [2024-11-06 10:20:10.159277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.712 [2024-11-06 10:20:10.159288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0880, cid 5, qid 0 00:28:06.712 [2024-11-06 10:20:10.159294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0700, cid 4, qid 0 00:28:06.712 [2024-11-06 10:20:10.159298] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0a00, cid 6, qid 0 00:28:06.712 [2024-11-06 10:20:10.159303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0b80, cid 7, qid 0 00:28:06.712 [2024-11-06 10:20:10.159540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.712 [2024-11-06 10:20:10.159547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.712 [2024-11-06 10:20:10.159550] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159554] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=8192, cccid=5 00:28:06.712 [2024-11-06 10:20:10.159558] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0880) on tqpair(0xe5e550): expected_datao=0, payload_size=8192 00:28:06.712 [2024-11-06 10:20:10.159563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159644] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.712 [2024-11-06 10:20:10.159659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.712 [2024-11-06 10:20:10.159663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=512, cccid=4 00:28:06.712 [2024-11-06 10:20:10.159671] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0700) on tqpair(0xe5e550): expected_datao=0, payload_size=512 00:28:06.712 [2024-11-06 10:20:10.159675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159685] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.712 [2024-11-06 10:20:10.159696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.712 [2024-11-06 10:20:10.159699] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159703] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe5e550): datao=0, datal=512, cccid=6 00:28:06.712 [2024-11-06 10:20:10.159710] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0a00) on tqpair(0xe5e550): expected_datao=0, payload_size=512 00:28:06.712 [2024-11-06 10:20:10.159714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159721] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:06.712 [2024-11-06 10:20:10.159735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:06.712 [2024-11-06 10:20:10.159739] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159742] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xe5e550): datao=0, datal=4096, cccid=7 00:28:06.712 [2024-11-06 10:20:10.159746] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xec0b80) on tqpair(0xe5e550): expected_datao=0, payload_size=4096 00:28:06.712 [2024-11-06 10:20:10.159751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159757] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159761] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.712 [2024-11-06 10:20:10.159774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.712 [2024-11-06 10:20:10.159778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.712 [2024-11-06 10:20:10.159782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0880) on tqpair=0xe5e550 00:28:06.712 [2024-11-06 10:20:10.159793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.713 [2024-11-06 10:20:10.159799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.713 [2024-11-06 10:20:10.159803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.713 [2024-11-06 10:20:10.159806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0700) on tqpair=0xe5e550 00:28:06.713 [2024-11-06 10:20:10.159816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.713 [2024-11-06 10:20:10.159822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.713 [2024-11-06 10:20:10.159825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.713 [2024-11-06 10:20:10.159829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0a00) on tqpair=0xe5e550 00:28:06.713 [2024-11-06 10:20:10.159836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.713 [2024-11-06 10:20:10.159842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.713 [2024-11-06 10:20:10.159845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.713 [2024-11-06 10:20:10.159849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0b80) on tqpair=0xe5e550 00:28:06.713 ===================================================== 00:28:06.713 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.713 ===================================================== 00:28:06.713 Controller Capabilities/Features 00:28:06.713 ================================ 00:28:06.713 Vendor ID: 8086 00:28:06.713 Subsystem Vendor ID: 8086 00:28:06.713 Serial Number: SPDK00000000000001 00:28:06.713 Model Number: SPDK bdev Controller 00:28:06.713 Firmware Version: 25.01 00:28:06.713 Recommended Arb Burst: 6 00:28:06.713 IEEE OUI Identifier: e4 d2 5c 00:28:06.713 Multi-path I/O 00:28:06.713 May have multiple subsystem ports: Yes 00:28:06.713 May have multiple controllers: Yes 00:28:06.713 Associated with SR-IOV VF: No 00:28:06.713 Max Data Transfer Size: 131072 00:28:06.713 Max Number of Namespaces: 32 00:28:06.713 Max Number of I/O Queues: 127 00:28:06.713 NVMe Specification Version (VS): 1.3 00:28:06.713 NVMe Specification Version (Identify): 1.3 00:28:06.713 Maximum Queue Entries: 128 00:28:06.713 Contiguous Queues Required: Yes 00:28:06.713 Arbitration Mechanisms Supported 00:28:06.713 Weighted Round Robin: Not Supported 
00:28:06.713 Vendor Specific: Not Supported 00:28:06.713 Reset Timeout: 15000 ms 00:28:06.713 Doorbell Stride: 4 bytes 00:28:06.713 NVM Subsystem Reset: Not Supported 00:28:06.713 Command Sets Supported 00:28:06.713 NVM Command Set: Supported 00:28:06.713 Boot Partition: Not Supported 00:28:06.713 Memory Page Size Minimum: 4096 bytes 00:28:06.713 Memory Page Size Maximum: 4096 bytes 00:28:06.713 Persistent Memory Region: Not Supported 00:28:06.713 Optional Asynchronous Events Supported 00:28:06.713 Namespace Attribute Notices: Supported 00:28:06.713 Firmware Activation Notices: Not Supported 00:28:06.713 ANA Change Notices: Not Supported 00:28:06.713 PLE Aggregate Log Change Notices: Not Supported 00:28:06.713 LBA Status Info Alert Notices: Not Supported 00:28:06.713 EGE Aggregate Log Change Notices: Not Supported 00:28:06.713 Normal NVM Subsystem Shutdown event: Not Supported 00:28:06.713 Zone Descriptor Change Notices: Not Supported 00:28:06.713 Discovery Log Change Notices: Not Supported 00:28:06.713 Controller Attributes 00:28:06.713 128-bit Host Identifier: Supported 00:28:06.713 Non-Operational Permissive Mode: Not Supported 00:28:06.713 NVM Sets: Not Supported 00:28:06.713 Read Recovery Levels: Not Supported 00:28:06.713 Endurance Groups: Not Supported 00:28:06.713 Predictable Latency Mode: Not Supported 00:28:06.713 Traffic Based Keep ALive: Not Supported 00:28:06.713 Namespace Granularity: Not Supported 00:28:06.713 SQ Associations: Not Supported 00:28:06.713 UUID List: Not Supported 00:28:06.713 Multi-Domain Subsystem: Not Supported 00:28:06.713 Fixed Capacity Management: Not Supported 00:28:06.713 Variable Capacity Management: Not Supported 00:28:06.713 Delete Endurance Group: Not Supported 00:28:06.713 Delete NVM Set: Not Supported 00:28:06.713 Extended LBA Formats Supported: Not Supported 00:28:06.713 Flexible Data Placement Supported: Not Supported 00:28:06.713 00:28:06.713 Controller Memory Buffer Support 00:28:06.713 ================================ 00:28:06.713 Supported: No 00:28:06.713 00:28:06.713 Persistent Memory Region Support 00:28:06.713 ================================ 00:28:06.713 Supported: No 00:28:06.713 00:28:06.713 Admin Command Set Attributes 00:28:06.713 ============================ 00:28:06.713 Security Send/Receive: Not Supported 00:28:06.713 Format NVM: Not Supported 00:28:06.713 Firmware Activate/Download: Not Supported 00:28:06.713 Namespace Management: Not Supported 00:28:06.713 Device Self-Test: Not Supported 00:28:06.713 Directives: Not Supported 00:28:06.713 NVMe-MI: Not Supported 00:28:06.713 Virtualization Management: Not Supported 00:28:06.713 Doorbell Buffer Config: Not Supported 00:28:06.713 Get LBA Status Capability: Not Supported 00:28:06.713 Command & Feature Lockdown Capability: Not Supported 00:28:06.713 Abort Command Limit: 4 00:28:06.713 Async Event Request Limit: 4 00:28:06.713 Number of Firmware Slots: N/A 00:28:06.713 Firmware Slot 1 Read-Only: N/A 00:28:06.713 Firmware Activation Without Reset: N/A 00:28:06.713 Multiple Update Detection Support: N/A 00:28:06.713 Firmware Update Granularity: No Information Provided 00:28:06.713 Per-Namespace SMART Log: No 00:28:06.713 Asymmetric Namespace Access Log Page: Not Supported 00:28:06.713 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:06.713 Command Effects Log Page: Supported 00:28:06.713 Get Log Page Extended Data: Supported 00:28:06.713 Telemetry Log Pages: Not Supported 00:28:06.713 Persistent Event Log Pages: Not Supported 00:28:06.713 Supported Log Pages Log Page: May Support 
00:28:06.713 Commands Supported & Effects Log Page: Not Supported 00:28:06.713 Feature Identifiers & Effects Log Page:May Support 00:28:06.713 NVMe-MI Commands & Effects Log Page: May Support 00:28:06.713 Data Area 4 for Telemetry Log: Not Supported 00:28:06.713 Error Log Page Entries Supported: 128 00:28:06.713 Keep Alive: Supported 00:28:06.713 Keep Alive Granularity: 10000 ms 00:28:06.713 00:28:06.713 NVM Command Set Attributes 00:28:06.713 ========================== 00:28:06.713 Submission Queue Entry Size 00:28:06.714 Max: 64 00:28:06.714 Min: 64 00:28:06.714 Completion Queue Entry Size 00:28:06.714 Max: 16 00:28:06.714 Min: 16 00:28:06.714 Number of Namespaces: 32 00:28:06.714 Compare Command: Supported 00:28:06.714 Write Uncorrectable Command: Not Supported 00:28:06.714 Dataset Management Command: Supported 00:28:06.714 Write Zeroes Command: Supported 00:28:06.714 Set Features Save Field: Not Supported 00:28:06.714 Reservations: Supported 00:28:06.714 Timestamp: Not Supported 00:28:06.714 Copy: Supported 00:28:06.714 Volatile Write Cache: Present 00:28:06.714 Atomic Write Unit (Normal): 1 00:28:06.714 Atomic Write Unit (PFail): 1 00:28:06.714 Atomic Compare & Write Unit: 1 00:28:06.714 Fused Compare & Write: Supported 00:28:06.714 Scatter-Gather List 00:28:06.714 SGL Command Set: Supported 00:28:06.714 SGL Keyed: Supported 00:28:06.714 SGL Bit Bucket Descriptor: Not Supported 00:28:06.714 SGL Metadata Pointer: Not Supported 00:28:06.714 Oversized SGL: Not Supported 00:28:06.714 SGL Metadata Address: Not Supported 00:28:06.714 SGL Offset: Supported 00:28:06.714 Transport SGL Data Block: Not Supported 00:28:06.714 Replay Protected Memory Block: Not Supported 00:28:06.714 00:28:06.714 Firmware Slot Information 00:28:06.714 ========================= 00:28:06.714 Active slot: 1 00:28:06.714 Slot 1 Firmware Revision: 25.01 00:28:06.714 00:28:06.714 00:28:06.714 Commands Supported and Effects 00:28:06.714 ============================== 00:28:06.714 Admin Commands 00:28:06.714 -------------- 00:28:06.714 Get Log Page (02h): Supported 00:28:06.714 Identify (06h): Supported 00:28:06.714 Abort (08h): Supported 00:28:06.714 Set Features (09h): Supported 00:28:06.714 Get Features (0Ah): Supported 00:28:06.714 Asynchronous Event Request (0Ch): Supported 00:28:06.714 Keep Alive (18h): Supported 00:28:06.714 I/O Commands 00:28:06.714 ------------ 00:28:06.714 Flush (00h): Supported LBA-Change 00:28:06.714 Write (01h): Supported LBA-Change 00:28:06.714 Read (02h): Supported 00:28:06.714 Compare (05h): Supported 00:28:06.714 Write Zeroes (08h): Supported LBA-Change 00:28:06.714 Dataset Management (09h): Supported LBA-Change 00:28:06.714 Copy (19h): Supported LBA-Change 00:28:06.714 00:28:06.714 Error Log 00:28:06.714 ========= 00:28:06.714 00:28:06.714 Arbitration 00:28:06.714 =========== 00:28:06.714 Arbitration Burst: 1 00:28:06.714 00:28:06.714 Power Management 00:28:06.714 ================ 00:28:06.714 Number of Power States: 1 00:28:06.714 Current Power State: Power State #0 00:28:06.714 Power State #0: 00:28:06.714 Max Power: 0.00 W 00:28:06.714 Non-Operational State: Operational 00:28:06.714 Entry Latency: Not Reported 00:28:06.714 Exit Latency: Not Reported 00:28:06.714 Relative Read Throughput: 0 00:28:06.714 Relative Read Latency: 0 00:28:06.714 Relative Write Throughput: 0 00:28:06.714 Relative Write Latency: 0 00:28:06.714 Idle Power: Not Reported 00:28:06.714 Active Power: Not Reported 00:28:06.714 Non-Operational Permissive Mode: Not Supported 00:28:06.714 00:28:06.714 Health 
Information 00:28:06.714 ================== 00:28:06.714 Critical Warnings: 00:28:06.714 Available Spare Space: OK 00:28:06.714 Temperature: OK 00:28:06.714 Device Reliability: OK 00:28:06.714 Read Only: No 00:28:06.714 Volatile Memory Backup: OK 00:28:06.714 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:06.714 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:06.714 Available Spare: 0% 00:28:06.714 Available Spare Threshold: 0% 00:28:06.714 Life Percentage Used:[2024-11-06 10:20:10.159953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.714 [2024-11-06 10:20:10.159958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe5e550) 00:28:06.714 [2024-11-06 10:20:10.159965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.714 [2024-11-06 10:20:10.159977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0b80, cid 7, qid 0 00:28:06.714 [2024-11-06 10:20:10.160158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.714 [2024-11-06 10:20:10.160164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.714 [2024-11-06 10:20:10.160168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.714 [2024-11-06 10:20:10.160171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0b80) on tqpair=0xe5e550 00:28:06.714 [2024-11-06 10:20:10.160203] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:28:06.714 [2024-11-06 10:20:10.160216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0100) on tqpair=0xe5e550 00:28:06.714 [2024-11-06 10:20:10.160224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.714 [2024-11-06 10:20:10.160229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0280) on tqpair=0xe5e550 00:28:06.714 [2024-11-06 10:20:10.160234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.714 [2024-11-06 10:20:10.160239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0400) on tqpair=0xe5e550 00:28:06.714 [2024-11-06 10:20:10.160243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.714 [2024-11-06 10:20:10.160248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0580) on tqpair=0xe5e550 00:28:06.714 [2024-11-06 10:20:10.160253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.714 [2024-11-06 10:20:10.160261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.714 [2024-11-06 10:20:10.160265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.714 [2024-11-06 10:20:10.160269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe5e550) 00:28:06.714 [2024-11-06 10:20:10.160276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.714 [2024-11-06 10:20:10.160288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0580, cid 3, qid 0 00:28:06.714 [2024-11-06 
10:20:10.160466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.714 [2024-11-06 10:20:10.160472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.714 [2024-11-06 10:20:10.160476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0580) on tqpair=0xe5e550 00:28:06.715 [2024-11-06 10:20:10.160486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe5e550) 00:28:06.715 [2024-11-06 10:20:10.160500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.715 [2024-11-06 10:20:10.160513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0580, cid 3, qid 0 00:28:06.715 [2024-11-06 10:20:10.160676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.715 [2024-11-06 10:20:10.160683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.715 [2024-11-06 10:20:10.160686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0580) on tqpair=0xe5e550 00:28:06.715 [2024-11-06 10:20:10.160695] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:06.715 [2024-11-06 10:20:10.160701] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:06.715 [2024-11-06 10:20:10.160710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.160718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe5e550) 00:28:06.715 [2024-11-06 10:20:10.160724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.715 [2024-11-06 10:20:10.160734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xec0580, cid 3, qid 0 00:28:06.715 [2024-11-06 10:20:10.164872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:06.715 [2024-11-06 10:20:10.164881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:06.715 [2024-11-06 10:20:10.164887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:06.715 [2024-11-06 10:20:10.164891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xec0580) on tqpair=0xe5e550 00:28:06.715 [2024-11-06 10:20:10.164899] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:28:06.715 0% 00:28:06.715 Data Units Read: 0 00:28:06.715 Data Units Written: 0 00:28:06.715 Host Read Commands: 0 00:28:06.715 Host Write Commands: 0 00:28:06.715 Controller Busy Time: 0 minutes 00:28:06.715 Power Cycles: 0 00:28:06.715 Power On Hours: 0 hours 00:28:06.715 Unsafe Shutdowns: 0 00:28:06.715 Unrecoverable Media Errors: 0 00:28:06.715 Lifetime Error Log Entries: 0 00:28:06.715 Warning Temperature 
Time: 0 minutes 00:28:06.715 Critical Temperature Time: 0 minutes 00:28:06.715 00:28:06.715 Number of Queues 00:28:06.715 ================ 00:28:06.715 Number of I/O Submission Queues: 127 00:28:06.715 Number of I/O Completion Queues: 127 00:28:06.715 00:28:06.715 Active Namespaces 00:28:06.715 ================= 00:28:06.715 Namespace ID:1 00:28:06.715 Error Recovery Timeout: Unlimited 00:28:06.715 Command Set Identifier: NVM (00h) 00:28:06.715 Deallocate: Supported 00:28:06.715 Deallocated/Unwritten Error: Not Supported 00:28:06.715 Deallocated Read Value: Unknown 00:28:06.715 Deallocate in Write Zeroes: Not Supported 00:28:06.715 Deallocated Guard Field: 0xFFFF 00:28:06.715 Flush: Supported 00:28:06.715 Reservation: Supported 00:28:06.715 Namespace Sharing Capabilities: Multiple Controllers 00:28:06.715 Size (in LBAs): 131072 (0GiB) 00:28:06.715 Capacity (in LBAs): 131072 (0GiB) 00:28:06.715 Utilization (in LBAs): 131072 (0GiB) 00:28:06.715 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:06.715 EUI64: ABCDEF0123456789 00:28:06.715 UUID: 9c0584ab-ff94-42e9-9688-55f257768634 00:28:06.715 Thin Provisioning: Not Supported 00:28:06.715 Per-NS Atomic Units: Yes 00:28:06.715 Atomic Boundary Size (Normal): 0 00:28:06.715 Atomic Boundary Size (PFail): 0 00:28:06.715 Atomic Boundary Offset: 0 00:28:06.715 Maximum Single Source Range Length: 65535 00:28:06.715 Maximum Copy Length: 65535 00:28:06.715 Maximum Source Range Count: 1 00:28:06.715 NGUID/EUI64 Never Reused: No 00:28:06.715 Namespace Write Protected: No 00:28:06.715 Number of LBA Formats: 1 00:28:06.715 Current LBA Format: LBA Format #00 00:28:06.715 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:06.715 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.715 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.977 rmmod nvme_tcp 00:28:06.977 rmmod nvme_fabrics 00:28:06.977 rmmod nvme_keyring 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3995889 ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3995889 ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3995889' 00:28:06.977 killing process with pid 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3995889 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.977 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.239 10:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.152 00:28:09.152 real 0m12.523s 00:28:09.152 user 0m8.738s 00:28:09.152 sys 0m6.780s 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:09.152 ************************************ 00:28:09.152 END TEST nvmf_identify 00:28:09.152 ************************************ 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.152 ************************************ 
00:28:09.152 START TEST nvmf_perf 00:28:09.152 ************************************ 00:28:09.152 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:09.413 * Looking for test storage... 00:28:09.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.413 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:09.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.414 --rc genhtml_branch_coverage=1 00:28:09.414 --rc genhtml_function_coverage=1 00:28:09.414 --rc genhtml_legend=1 00:28:09.414 --rc geninfo_all_blocks=1 00:28:09.414 --rc geninfo_unexecuted_blocks=1 00:28:09.414 00:28:09.414 ' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:09.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.414 --rc genhtml_branch_coverage=1 00:28:09.414 --rc genhtml_function_coverage=1 00:28:09.414 --rc genhtml_legend=1 00:28:09.414 --rc geninfo_all_blocks=1 00:28:09.414 --rc geninfo_unexecuted_blocks=1 00:28:09.414 00:28:09.414 ' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:09.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.414 --rc genhtml_branch_coverage=1 00:28:09.414 --rc genhtml_function_coverage=1 00:28:09.414 --rc genhtml_legend=1 00:28:09.414 --rc geninfo_all_blocks=1 00:28:09.414 --rc geninfo_unexecuted_blocks=1 00:28:09.414 00:28:09.414 ' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:09.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.414 --rc genhtml_branch_coverage=1 00:28:09.414 --rc genhtml_function_coverage=1 00:28:09.414 --rc genhtml_legend=1 00:28:09.414 --rc geninfo_all_blocks=1 00:28:09.414 --rc geninfo_unexecuted_blocks=1 00:28:09.414 00:28:09.414 ' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.414 10:20:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.414 10:20:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:17.552 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:17.552 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:17.552 Found net devices under 0000:31:00.0: cvl_0_0 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.552 10:20:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:17.552 Found net devices under 0000:31:00.1: cvl_0_1 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.552 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.813 10:20:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.813 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:28:18.074 00:28:18.074 --- 10.0.0.2 ping statistics --- 00:28:18.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.074 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:28:18.074 00:28:18.074 --- 10.0.0.1 ping statistics --- 00:28:18.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.074 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4000930 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4000930 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 4000930 ']' 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:18.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.074 10:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:18.074 [2024-11-06 10:20:21.436070] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:18.074 [2024-11-06 10:20:21.436119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.074 [2024-11-06 10:20:21.522113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.074 [2024-11-06 10:20:21.558098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.074 [2024-11-06 10:20:21.558131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.074 [2024-11-06 10:20:21.558140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.074 [2024-11-06 10:20:21.558147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.074 [2024-11-06 10:20:21.558153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.074 [2024-11-06 10:20:21.559698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.074 [2024-11-06 10:20:21.559813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.074 [2024-11-06 10:20:21.559940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.074 [2024-11-06 10:20:21.560065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:19.016 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:19.276 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:19.277 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:19.537 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:19.537 10:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:19.797 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
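For orientation, the target-side configuration that host/perf.sh drives through rpc.py in the trace that follows condenses to the sketch below. It is not part of the captured output: the rpc.py path is shortened from the full Jenkins workspace path, it assumes an nvmf_tgt is already up and listening on /var/tmp/spdk.sock (as started above), and it reuses the subsystem NQN, bdev names and listen address that appear verbatim in the trace.

    # condensed NVMe-oF TCP target setup, as driven by host/perf.sh via rpc.py
    scripts/rpc.py nvmf_create_transport -t tcp -o                    # TCP transport, options as set by nvmf/common.sh
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # namespace 1: malloc bdev
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # namespace 2: local NVMe bdev
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420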
00:28:19.797 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:19.797 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:19.797 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:19.797 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:20.057 [2024-11-06 10:20:23.319952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.057 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.057 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:20.058 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:20.317 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:20.318 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:20.578 10:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.578 [2024-11-06 10:20:24.058594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.839 10:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:20.839 10:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:20.839 10:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:20.839 10:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:20.839 10:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:22.223 Initializing NVMe Controllers 00:28:22.223 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:22.223 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:22.223 Initialization complete. Launching workers. 
00:28:22.223 ======================================================== 00:28:22.223 Latency(us) 00:28:22.223 Device Information : IOPS MiB/s Average min max 00:28:22.223 PCIE (0000:65:00.0) NSID 1 from core 0: 78740.21 307.58 405.87 13.38 8193.67 00:28:22.223 ======================================================== 00:28:22.223 Total : 78740.21 307.58 405.87 13.38 8193.67 00:28:22.223 00:28:22.223 10:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:23.608 Initializing NVMe Controllers 00:28:23.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:23.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:23.608 Initialization complete. Launching workers. 00:28:23.608 ======================================================== 00:28:23.608 Latency(us) 00:28:23.608 Device Information : IOPS MiB/s Average min max 00:28:23.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.79 0.31 12931.90 124.25 45253.27 00:28:23.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 69.82 0.27 14778.97 7954.75 47890.67 00:28:23.608 ======================================================== 00:28:23.608 Total : 149.61 0.58 13793.87 124.25 47890.67 00:28:23.608 00:28:23.608 10:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.992 Initializing NVMe Controllers 00:28:24.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.992 Initialization complete. Launching workers. 00:28:24.992 ======================================================== 00:28:24.992 Latency(us) 00:28:24.992 Device Information : IOPS MiB/s Average min max 00:28:24.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10539.43 41.17 3036.16 568.93 6471.88 00:28:24.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3845.79 15.02 8365.02 6100.15 16584.93 00:28:24.992 ======================================================== 00:28:24.992 Total : 14385.22 56.19 4460.79 568.93 16584.93 00:28:24.992 00:28:24.992 10:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:24.992 10:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:24.992 10:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.535 Initializing NVMe Controllers 00:28:27.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.535 Controller IO queue size 128, less than required. 00:28:27.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:27.535 Controller IO queue size 128, less than required. 00:28:27.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.535 Initialization complete. Launching workers. 00:28:27.535 ======================================================== 00:28:27.535 Latency(us) 00:28:27.535 Device Information : IOPS MiB/s Average min max 00:28:27.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1620.77 405.19 80092.65 46882.57 119561.93 00:28:27.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.24 146.56 235666.06 74202.02 366178.54 00:28:27.535 ======================================================== 00:28:27.535 Total : 2207.00 551.75 121416.84 46882.57 366178.54 00:28:27.535 00:28:27.535 10:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:27.535 No valid NVMe controllers or AIO or URING devices found 00:28:27.535 Initializing NVMe Controllers 00:28:27.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.535 Controller IO queue size 128, less than required. 00:28:27.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.535 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:27.535 Controller IO queue size 128, less than required. 00:28:27.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.535 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:27.535 WARNING: Some requested NVMe devices were skipped 00:28:27.535 10:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:30.217 Initializing NVMe Controllers 00:28:30.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.217 Controller IO queue size 128, less than required. 00:28:30.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.217 Controller IO queue size 128, less than required. 00:28:30.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:30.217 Initialization complete. Launching workers. 
00:28:30.217 00:28:30.217 ==================== 00:28:30.217 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:30.218 TCP transport: 00:28:30.218 polls: 21020 00:28:30.218 idle_polls: 11164 00:28:30.218 sock_completions: 9856 00:28:30.218 nvme_completions: 6697 00:28:30.218 submitted_requests: 10026 00:28:30.218 queued_requests: 1 00:28:30.218 00:28:30.218 ==================== 00:28:30.218 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:30.218 TCP transport: 00:28:30.218 polls: 24083 00:28:30.218 idle_polls: 13605 00:28:30.218 sock_completions: 10478 00:28:30.218 nvme_completions: 6625 00:28:30.218 submitted_requests: 9816 00:28:30.218 queued_requests: 1 00:28:30.218 ======================================================== 00:28:30.218 Latency(us) 00:28:30.218 Device Information : IOPS MiB/s Average min max 00:28:30.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1674.00 418.50 78139.34 53304.21 127066.41 00:28:30.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1656.00 414.00 78454.03 37296.68 111100.34 00:28:30.218 ======================================================== 00:28:30.218 Total : 3330.00 832.50 78295.84 37296.68 127066.41 00:28:30.218 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.218 rmmod nvme_tcp 00:28:30.218 rmmod nvme_fabrics 00:28:30.218 rmmod nvme_keyring 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4000930 ']' 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4000930 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 4000930 ']' 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 4000930 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.218 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4000930 00:28:30.478 10:20:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:30.478 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:30.478 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4000930' 00:28:30.478 killing process with pid 4000930 00:28:30.478 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 4000930 00:28:30.478 10:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 4000930 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.392 10:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:34.939 00:28:34.939 real 0m25.192s 00:28:34.939 user 0m58.457s 00:28:34.939 sys 0m9.189s 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:34.939 ************************************ 00:28:34.939 END TEST nvmf_perf 00:28:34.939 ************************************ 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.939 ************************************ 00:28:34.939 START TEST nvmf_fio_host 00:28:34.939 ************************************ 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:34.939 * Looking for test storage... 
00:28:34.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:28:34.939 10:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:34.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.939 --rc genhtml_branch_coverage=1 00:28:34.939 --rc genhtml_function_coverage=1 00:28:34.939 --rc genhtml_legend=1 00:28:34.939 --rc geninfo_all_blocks=1 00:28:34.939 --rc geninfo_unexecuted_blocks=1 00:28:34.939 00:28:34.939 ' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:34.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.939 --rc genhtml_branch_coverage=1 00:28:34.939 --rc genhtml_function_coverage=1 00:28:34.939 --rc genhtml_legend=1 00:28:34.939 --rc geninfo_all_blocks=1 00:28:34.939 --rc geninfo_unexecuted_blocks=1 00:28:34.939 00:28:34.939 ' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:34.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.939 --rc genhtml_branch_coverage=1 00:28:34.939 --rc genhtml_function_coverage=1 00:28:34.939 --rc genhtml_legend=1 00:28:34.939 --rc geninfo_all_blocks=1 00:28:34.939 --rc geninfo_unexecuted_blocks=1 00:28:34.939 00:28:34.939 ' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:34.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.939 --rc genhtml_branch_coverage=1 00:28:34.939 --rc genhtml_function_coverage=1 00:28:34.939 --rc genhtml_legend=1 00:28:34.939 --rc geninfo_all_blocks=1 00:28:34.939 --rc geninfo_unexecuted_blocks=1 00:28:34.939 00:28:34.939 ' 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.939 10:20:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.939 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:34.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:34.940 
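The nvmftestinit call traced over the next several lines prepares the physical test network before the fio host test can connect. Condensed from the ip and iptables commands captured in this trace (the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are specific to this rig, not general defaults), the setup is roughly:

    # move the target-side port into its own netns and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1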
10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.940 10:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:43.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:43.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:43.086 Found net devices under 0000:31:00.0: cvl_0_0 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:43.086 Found net devices under 0000:31:00.1: cvl_0_1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.086 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:28:43.087 00:28:43.087 --- 10.0.0.2 ping statistics --- 00:28:43.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.087 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:28:43.087 00:28:43.087 --- 10.0.0.1 ping statistics --- 00:28:43.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.087 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4008358 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4008358 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 4008358 ']' 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:43.087 10:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.087 [2024-11-06 10:20:46.552820] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:28:43.087 [2024-11-06 10:20:46.552877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.348 [2024-11-06 10:20:46.639148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:43.348 [2024-11-06 10:20:46.675125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.348 [2024-11-06 10:20:46.675155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.348 [2024-11-06 10:20:46.675163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.348 [2024-11-06 10:20:46.675170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.348 [2024-11-06 10:20:46.675176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.348 [2024-11-06 10:20:46.676739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.348 [2024-11-06 10:20:46.676849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.348 [2024-11-06 10:20:46.677024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.348 [2024-11-06 10:20:46.677025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.920 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.920 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:28:43.920 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:44.181 [2024-11-06 10:20:47.531243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.181 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:44.181 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:44.181 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.181 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:44.442 Malloc1 00:28:44.442 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.703 10:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:44.703 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.964 [2024-11-06 10:20:48.320632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.964 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:45.226 10:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:45.487 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:45.487 fio-3.35 00:28:45.487 Starting 1 thread 00:28:48.031 00:28:48.031 test: (groupid=0, jobs=1): 
err= 0: pid=4009210: Wed Nov 6 10:20:51 2024 00:28:48.031 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec) 00:28:48.031 slat (usec): min=2, max=277, avg= 2.15, stdev= 2.41 00:28:48.031 clat (usec): min=3871, max=8781, avg=5114.64, stdev=365.99 00:28:48.031 lat (usec): min=3873, max=8787, avg=5116.79, stdev=366.20 00:28:48.031 clat percentiles (usec): 00:28:48.031 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:28:48.031 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:28:48.031 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:28:48.031 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 8029], 99.95th=[ 8455], 00:28:48.031 | 99.99th=[ 8717] 00:28:48.031 bw ( KiB/s): min=53520, max=55664, per=100.00%, avg=55086.00, stdev=1046.59, samples=4 00:28:48.031 iops : min=13380, max=13916, avg=13771.50, stdev=261.65, samples=4 00:28:48.031 write: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2005msec); 0 zone resets 00:28:48.031 slat (usec): min=2, max=270, avg= 2.21, stdev= 1.80 00:28:48.031 clat (usec): min=2917, max=8152, avg=4122.06, stdev=320.86 00:28:48.031 lat (usec): min=2920, max=8154, avg=4124.27, stdev=321.12 00:28:48.031 clat percentiles (usec): 00:28:48.031 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:28:48.031 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:28:48.031 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:28:48.031 | 99.00th=[ 4883], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 7242], 00:28:48.031 | 99.99th=[ 8094] 00:28:48.031 bw ( KiB/s): min=54008, max=55584, per=100.00%, avg=55012.00, stdev=690.24, samples=4 00:28:48.031 iops : min=13502, max=13896, avg=13753.00, stdev=172.56, samples=4 00:28:48.031 lat (msec) : 4=16.58%, 10=83.42% 00:28:48.031 cpu : usr=78.44%, sys=20.91%, ctx=25, majf=0, minf=17 00:28:48.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:48.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:48.031 issued rwts: total=27604,27572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:48.031 00:28:48.031 Run status group 0 (all jobs): 00:28:48.031 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:28:48.031 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2005-2005msec 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:28:48.031 
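The fio_plugin trace above amounts to preloading the SPDK NVMe fio ioengine and pointing fio at a target description instead of a local block device; the ldd/grep/awk steps only exist to also preload libasan when the plugin was built with sanitizers, and they resolve to an empty string here. A stand-alone sketch of the equivalent invocation, using only paths and arguments that appear in this log (SPDK_DIR is shorthand introduced for readability):

  # preload the SPDK NVMe fio plugin and drive the NVMe-oF/TCP namespace directly
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096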
10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:48.031 10:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.602 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:48.602 fio-3.35 00:28:48.602 Starting 1 thread 00:28:49.987 [2024-11-06 10:20:53.418020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041930 is same with the state(6) to be set 00:28:50.930 00:28:50.930 test: (groupid=0, jobs=1): err= 0: pid=4009760: Wed Nov 6 10:20:54 2024 00:28:50.930 read: IOPS=9324, BW=146MiB/s (153MB/s)(292MiB/2006msec) 00:28:50.930 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.66 00:28:50.930 clat (usec): min=1085, max=16450, avg=8359.97, stdev=2033.17 00:28:50.930 lat (usec): min=1088, max=16453, avg=8363.58, stdev=2033.34 00:28:50.930 clat percentiles (usec): 00:28:50.930 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6521], 00:28:50.930 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8848], 00:28:50.930 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[10814], 95.00th=[11600], 00:28:50.930 | 99.00th=[13042], 99.50th=[13829], 99.90th=[15926], 99.95th=[16188], 00:28:50.930 | 99.99th=[16450] 00:28:50.930 bw ( KiB/s): min=67680, max=83520, per=49.35%, avg=73632.00, stdev=7330.77, samples=4 00:28:50.930 iops : min= 4230, max= 5220, 
avg=4602.00, stdev=458.17, samples=4 00:28:50.930 write: IOPS=5512, BW=86.1MiB/s (90.3MB/s)(151MiB/1749msec); 0 zone resets 00:28:50.930 slat (usec): min=39, max=449, avg=41.09, stdev= 9.30 00:28:50.930 clat (usec): min=1810, max=16596, avg=9494.65, stdev=1653.01 00:28:50.930 lat (usec): min=1850, max=16728, avg=9535.73, stdev=1655.39 00:28:50.930 clat percentiles (usec): 00:28:50.930 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 8225], 00:28:50.930 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:28:50.930 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12649], 00:28:50.930 | 99.00th=[14615], 99.50th=[15139], 99.90th=[16450], 99.95th=[16581], 00:28:50.930 | 99.99th=[16581] 00:28:50.930 bw ( KiB/s): min=70592, max=87040, per=86.77%, avg=76528.00, stdev=7719.83, samples=4 00:28:50.930 iops : min= 4412, max= 5440, avg=4783.00, stdev=482.49, samples=4 00:28:50.930 lat (msec) : 2=0.06%, 4=0.43%, 10=71.79%, 20=27.72% 00:28:50.930 cpu : usr=83.19%, sys=15.26%, ctx=15, majf=0, minf=39 00:28:50.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:50.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:50.930 issued rwts: total=18705,9641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:50.930 00:28:50.930 Run status group 0 (all jobs): 00:28:50.930 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (306MB), run=2006-2006msec 00:28:50.930 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=151MiB (158MB), run=1749-1749msec 00:28:50.930 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.191 rmmod nvme_tcp 00:28:51.191 rmmod nvme_fabrics 00:28:51.191 rmmod nvme_keyring 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.191 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4008358 ']' 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4008358 00:28:51.192 10:20:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 4008358 ']' 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 4008358 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4008358 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4008358' 00:28:51.192 killing process with pid 4008358 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 4008358 00:28:51.192 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 4008358 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.452 10:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.364 10:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.365 00:28:53.365 real 0m18.881s 00:28:53.365 user 1m11.537s 00:28:53.365 sys 0m8.226s 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.365 ************************************ 00:28:53.365 END TEST nvmf_fio_host 00:28:53.365 ************************************ 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.365 ************************************ 00:28:53.365 START TEST nvmf_failover 00:28:53.365 
************************************ 00:28:53.365 10:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:53.626 * Looking for test storage... 00:28:53.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.626 10:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:53.626 10:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:28:53.626 10:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.626 --rc genhtml_branch_coverage=1 00:28:53.626 --rc genhtml_function_coverage=1 00:28:53.626 --rc genhtml_legend=1 00:28:53.626 --rc geninfo_all_blocks=1 00:28:53.626 --rc geninfo_unexecuted_blocks=1 00:28:53.626 00:28:53.626 ' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.626 --rc genhtml_branch_coverage=1 00:28:53.626 --rc genhtml_function_coverage=1 00:28:53.626 --rc genhtml_legend=1 00:28:53.626 --rc geninfo_all_blocks=1 00:28:53.626 --rc geninfo_unexecuted_blocks=1 00:28:53.626 00:28:53.626 ' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.626 --rc genhtml_branch_coverage=1 00:28:53.626 --rc genhtml_function_coverage=1 00:28:53.626 --rc genhtml_legend=1 00:28:53.626 --rc geninfo_all_blocks=1 00:28:53.626 --rc geninfo_unexecuted_blocks=1 00:28:53.626 00:28:53.626 ' 00:28:53.626 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.626 --rc genhtml_branch_coverage=1 00:28:53.626 --rc genhtml_function_coverage=1 00:28:53.626 --rc genhtml_legend=1 00:28:53.626 --rc geninfo_all_blocks=1 00:28:53.626 --rc geninfo_unexecuted_blocks=1 00:28:53.626 00:28:53.627 ' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:53.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
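rpc_py points at the same rpc.py used for every configuration step in this test, both against the target and (with -s /var/tmp/bdevperf.sock) against bdevperf. Condensed from the rpc.py calls traced further down in this section, the target-side bring-up is the following sequence (addresses, NQN, serial and sizes exactly as logged; RPC is shorthand here, this is a summary sketch rather than a script from the repo):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport for the target
  $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The three listeners give bdevperf alternate paths to the same subsystem, which is what the later nvmf_subsystem_remove_listener/add_listener calls fail over between.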
00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.627 10:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.774 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:01.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:01.775 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:01.775 Found net devices under 0000:31:00.0: cvl_0_0 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:01.775 Found net devices under 0000:31:00.1: cvl_0_1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.775 10:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:29:01.775 00:29:01.775 --- 10.0.0.2 ping statistics --- 00:29:01.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.775 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:01.775 00:29:01.775 --- 10.0.0.1 ping statistics --- 00:29:01.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.775 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4015157 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4015157 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4015157 ']' 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.775 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:01.776 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.776 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:01.776 10:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:01.776 [2024-11-06 10:21:05.225295] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:01.776 [2024-11-06 10:21:05.225364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.036 [2024-11-06 10:21:05.331698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:02.036 [2024-11-06 10:21:05.382401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:02.036 [2024-11-06 10:21:05.382460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.036 [2024-11-06 10:21:05.382469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.036 [2024-11-06 10:21:05.382476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.036 [2024-11-06 10:21:05.382482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.036 [2024-11-06 10:21:05.384560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.036 [2024-11-06 10:21:05.384731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.036 [2024-11-06 10:21:05.384732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.607 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:02.868 [2024-11-06 10:21:06.231876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.868 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:03.128 Malloc0 00:29:03.128 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.388 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.388 10:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.649 [2024-11-06 10:21:06.995888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.649 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:03.910 [2024-11-06 10:21:07.180338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:03.910 [2024-11-06 10:21:07.364909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4015526 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4015526 /var/tmp/bdevperf.sock 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4015526 ']' 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:03.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:03.910 10:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:04.853 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:04.853 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:29:04.853 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:05.114 NVMe0n1 00:29:05.114 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:05.375 00:29:05.375 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4016046 00:29:05.375 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:05.375 10:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:06.765 10:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.765 [2024-11-06 10:21:10.019504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.765 [2024-11-06 10:21:10.019544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.765 [2024-11-06 10:21:10.019550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.765 
00:29:06.766 [2024-11-06 10:21:10.019759]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 [2024-11-06 10:21:10.019800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306390 is same with the state(6) to be set 00:29:06.766 10:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:10.067 10:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:10.067 00:29:10.067 10:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:10.067 10:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:13.367 10:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.367 [2024-11-06 10:21:16.708788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.367 10:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:14.310 10:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:14.571 [2024-11-06 10:21:17.897401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308090 is same with the state(6) to be set 00:29:14.571 [2024-11-06 10:21:17.897438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308090 is same with the state(6) to be set 00:29:14.571 [2024-11-06 10:21:17.897444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308090 is same with the state(6) to be set 00:29:14.571 [2024-11-06 10:21:17.897449] 
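The commands above are the failover exercise itself: host/failover.sh attaches the controller through bdevperf's RPC socket with an explicit failover policy, then removes and re-adds target listeners while the verify workload keeps running. A minimal stand-alone sketch of the same RPC flow, assuming the SPDK repository root as the working directory (the log itself uses the absolute path to scripts/rpc.py) and the bdevperf RPC socket at /var/tmp/bdevperf.sock as in this run:

  # Attach the subsystem through bdevperf's RPC socket with failover enabled
  # (same arguments as the host/failover.sh@47 call above).
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover

  # Remove and re-add target listeners so the initiator is forced to fail
  # over between paths while I/O is in flight (timings as in this run).
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422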
00:29:14.571 [2024-11-06 10:21:17.897401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308090 is same with the state(6) to be set
[... the same tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x2308090 repeats through 2024-11-06 10:21:17.897540 ...]
00:29:14.571 10:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4016046
00:29:21.159 {
00:29:21.159 "results": [
00:29:21.159 {
00:29:21.159 "job": "NVMe0n1",
00:29:21.159 "core_mask": "0x1",
00:29:21.159 "workload": "verify",
00:29:21.159 "status": "finished",
00:29:21.159 "verify_range": {
00:29:21.159 "start": 0,
00:29:21.159 "length": 16384
00:29:21.159 },
00:29:21.159 "queue_depth": 128,
00:29:21.159 "io_size": 4096,
00:29:21.159 "runtime": 15.007767,
00:29:21.159 "iops": 11217.924691927854,
00:29:21.159 "mibps": 43.82001832784318,
00:29:21.159 "io_failed": 5477,
00:29:21.159 "io_timeout": 0,
00:29:21.159 "avg_latency_us": 11022.176153588023,
00:29:21.159 "min_latency_us": 512.0,
00:29:21.159 "max_latency_us": 20534.613333333335
00:29:21.159 }
00:29:21.159 ],
00:29:21.159 "core_count": 1
00:29:21.159 }
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4015526 ']'
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4015526'
00:29:21.159 killing process with pid 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4015526
00:29:21.159 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:21.159 [2024-11-06 10:21:07.453963] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:29:21.159 [2024-11-06 10:21:07.454023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015526 ]
00:29:21.159 [2024-11-06 10:21:07.532017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:21.159 [2024-11-06 10:21:07.567632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:21.159 Running I/O for 15 seconds...
00:29:21.159 11481.00 IOPS, 44.85 MiB/s [2024-11-06T09:21:24.660Z] [2024-11-06 10:21:10.020939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... each outstanding command on qid:1 is printed the same way and completed with "ABORTED - SQ DELETION (00/08)": READ lba 98672-99200, WRITE lba 99320-99688, and queued READ lba 99208-99312 completed manually via nvme_qpair_abort_queued_reqs ...]
00:29:21.163 [2024-11-06 10:21:10.034471] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... the admin qpair's four outstanding ASYNC EVENT REQUESTs (qid:0 cid:0-3) are aborted with SQ DELETION in the same way ...]
00:29:21.163 [2024-11-06 10:21:10.034577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:21.163 [2024-11-06 10:21:10.034610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3d80 (9): Bad file descriptor
00:29:21.163 [2024-11-06 10:21:10.038175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:21.163 [2024-11-06 10:21:10.066810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:29:21.163 11278.50 IOPS, 44.06 MiB/s [2024-11-06T09:21:24.664Z] 11237.33 IOPS, 43.90 MiB/s [2024-11-06T09:21:24.664Z] 11197.25 IOPS, 43.74 MiB/s [2024-11-06T09:21:24.664Z]
[2024-11-06 10:21:13.520062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... the same print_command + "ABORTED - SQ DELETION" pattern repeats for the I/O outstanding during the next induced failover (WRITE lba 25352-25360, READ lba 24392 onward) ...]
00:29:21.163 [2024-11-06 10:21:13.520423]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.163 [2024-11-06 10:21:13.520430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.163 [2024-11-06 10:21:13.520442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24760 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.520989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.520998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.164 [2024-11-06 10:21:13.521090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.164 [2024-11-06 10:21:13.521101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.165 [2024-11-06 10:21:13.521125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521635] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.165 [2024-11-06 10:21:13.521771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.165 [2024-11-06 10:21:13.521781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.521947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.521965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.521975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.521982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:21.166 [2024-11-06 10:21:13.521991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.521999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.522015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.166 [2024-11-06 10:21:13.522032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522159] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:13.522284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6d20 is same with the state(6) to be set 00:29:21.166 [2024-11-06 10:21:13.522302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:21.166 [2024-11-06 10:21:13.522308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:21.166 [2024-11-06 10:21:13.522315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:8 PRP1 0x0 PRP2 0x0 00:29:21.166 [2024-11-06 10:21:13.522323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 
10:21:13.522361] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:21.166 [2024-11-06 10:21:13.522384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.166 [2024-11-06 10:21:13.522393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.166 [2024-11-06 10:21:13.522410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.166 [2024-11-06 10:21:13.522427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.166 [2024-11-06 10:21:13.522445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:13.522453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:29:21.166 [2024-11-06 10:21:13.526014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:29:21.166 [2024-11-06 10:21:13.526040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3d80 (9): Bad file descriptor 00:29:21.166 [2024-11-06 10:21:13.560724] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
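A minimal sketch (not part of the captured log) of how output like the excerpt above could be post-processed, assuming it has been saved to plain text: it pulls out the bdev_nvme failover and reset events and the bdevperf-style "IOPS, MiB/s" samples, and sanity-checks the throughput arithmetic against the 4 KiB I/O size implied by the `len:8` (8 × 512 B) reads, e.g. 11278.50 IOPS × 4 KiB ≈ 44.06 MiB/s. The regexes and the helper name are illustrative assumptions, not SPDK tooling.

```python
# Illustrative log post-processing sketch; the patterns below are taken from
# the console output above, the script itself is an assumption.
import re

FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK = re.compile(r"Resetting controller successful")
SAMPLE   = re.compile(r"([\d.]+) IOPS, ([\d.]+) MiB/s")

def summarize(text: str) -> None:
    # Failover transitions, e.g. 10.0.0.2:4420 -> 10.0.0.2:4421
    for src, dst in FAILOVER.findall(text):
        print(f"failover: {src} -> {dst}")
    print(f"successful controller resets: {len(RESET_OK.findall(text))}")
    # Each I/O in the log is len:8 blocks of 512 B = 4096 B,
    # so MiB/s should be roughly IOPS * 4096 / 2**20.
    for iops_s, mibs_s in SAMPLE.findall(text):
        iops, reported = float(iops_s), float(mibs_s)
        expected = iops * 4096 / (1 << 20)
        print(f"{iops:.2f} IOPS -> {expected:.2f} MiB/s (reported {reported:.2f})")

if __name__ == "__main__":
    # Tiny self-contained sample lifted from the log above.
    sample = (
        "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421\n"
        "Resetting controller successful.\n"
        "11278.50 IOPS, 44.06 MiB/s\n"
    )
    summarize(sample)
```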
00:29:21.166 11086.00 IOPS, 43.30 MiB/s [2024-11-06T09:21:24.667Z] 11107.33 IOPS, 43.39 MiB/s [2024-11-06T09:21:24.667Z] 11122.14 IOPS, 43.45 MiB/s [2024-11-06T09:21:24.667Z] 11125.12 IOPS, 43.46 MiB/s [2024-11-06T09:21:24.667Z] [2024-11-06 10:21:17.898010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:17.898045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.166 [2024-11-06 10:21:17.898062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.166 [2024-11-06 10:21:17.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:21.167 [2024-11-06 10:21:17.898214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 
10:21:17.898379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.167 [2024-11-06 10:21:17.898666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.167 [2024-11-06 10:21:17.898742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.167 [2024-11-06 10:21:17.898749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.898765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.898781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.898798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.898985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.898992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 
[2024-11-06 10:21:17.899069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.168 [2024-11-06 10:21:17.899241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.168 [2024-11-06 10:21:17.899335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.168 [2024-11-06 10:21:17.899342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35536 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 
[2024-11-06 10:21:17.899746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.169 [2024-11-06 10:21:17.899899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.899992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.169 [2024-11-06 10:21:17.899999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.169 [2024-11-06 10:21:17.900009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.170 [2024-11-06 10:21:17.900016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.170 [2024-11-06 10:21:17.900033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.170 [2024-11-06 10:21:17.900167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.170 [2024-11-06 10:21:17.900183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:21.170 [2024-11-06 10:21:17.900214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:21.170 [2024-11-06 10:21:17.900221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35304 len:8 PRP1 0x0 PRP2 0x0 00:29:21.170 [2024-11-06 10:21:17.900229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900271] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:21.170 [2024-11-06 10:21:17.900294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.170 [2024-11-06 10:21:17.900302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.170 [2024-11-06 10:21:17.900318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.170 [2024-11-06 10:21:17.900334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.170 [2024-11-06 10:21:17.900349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.170 [2024-11-06 10:21:17.900357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:21.170 [2024-11-06 10:21:17.903906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:21.170 [2024-11-06 10:21:17.903932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3d80 (9): Bad file descriptor 00:29:21.170 [2024-11-06 10:21:17.971284] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:29:21.170 11100.44 IOPS, 43.36 MiB/s [2024-11-06T09:21:24.671Z] 11148.70 IOPS, 43.55 MiB/s [2024-11-06T09:21:24.671Z] 11158.55 IOPS, 43.59 MiB/s [2024-11-06T09:21:24.671Z] 11202.92 IOPS, 43.76 MiB/s [2024-11-06T09:21:24.671Z] 11208.77 IOPS, 43.78 MiB/s [2024-11-06T09:21:24.671Z] 11206.64 IOPS, 43.78 MiB/s [2024-11-06T09:21:24.671Z] 11220.53 IOPS, 43.83 MiB/s 00:29:21.170 Latency(us) 00:29:21.170 [2024-11-06T09:21:24.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.170 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:21.170 Verification LBA range: start 0x0 length 0x4000 00:29:21.170 NVMe0n1 : 15.01 11217.92 43.82 364.94 0.00 11022.18 512.00 20534.61 00:29:21.170 [2024-11-06T09:21:24.671Z] =================================================================================================================== 00:29:21.170 [2024-11-06T09:21:24.671Z] Total : 11217.92 43.82 364.94 0.00 11022.18 512.00 20534.61 00:29:21.170 Received shutdown signal, test time was about 15.000000 seconds 00:29:21.170 00:29:21.170 Latency(us) 00:29:21.170 [2024-11-06T09:21:24.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.170 [2024-11-06T09:21:24.671Z] =================================================================================================================== 00:29:21.170 [2024-11-06T09:21:24.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4019298 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4019298 /var/tmp/bdevperf.sock 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4019298 ']' 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:21.170 10:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.741 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:21.741 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:29:21.741 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:21.741 [2024-11-06 10:21:25.208714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:21.741 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:22.002 [2024-11-06 10:21:25.389147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:22.002 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:22.575 NVMe0n1 00:29:22.575 10:21:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:22.835 00:29:22.835 10:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:23.095 00:29:23.095 10:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:23.095 10:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:23.356 10:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:23.356 10:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:26.656 10:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:26.656 10:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:26.656 10:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4020337 00:29:26.656 10:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.656 10:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4020337 00:29:27.600 { 00:29:27.600 "results": [ 00:29:27.600 { 00:29:27.600 "job": "NVMe0n1", 00:29:27.600 "core_mask": "0x1", 00:29:27.600 "workload": "verify", 00:29:27.600 "status": "finished", 00:29:27.600 "verify_range": { 00:29:27.600 "start": 0, 00:29:27.600 "length": 16384 00:29:27.600 }, 00:29:27.600 "queue_depth": 128, 00:29:27.600 "io_size": 4096, 00:29:27.600 "runtime": 1.005399, 00:29:27.600 "iops": 12061.87792110396, 00:29:27.600 "mibps": 47.116710629312344, 00:29:27.600 "io_failed": 0, 00:29:27.600 "io_timeout": 0, 00:29:27.600 "avg_latency_us": 10560.417624584261, 00:29:27.600 "min_latency_us": 2635.0933333333332, 00:29:27.600 "max_latency_us": 10540.373333333333 00:29:27.600 } 00:29:27.600 ], 00:29:27.600 "core_count": 1 00:29:27.600 } 00:29:27.600 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:27.600 [2024-11-06 10:21:24.256518] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:27.600 [2024-11-06 10:21:24.256575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019298 ] 00:29:27.600 [2024-11-06 10:21:24.334139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.600 [2024-11-06 10:21:24.368930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.600 [2024-11-06 10:21:26.776507] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:27.600 [2024-11-06 10:21:26.776552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.600 [2024-11-06 10:21:26.776564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.600 [2024-11-06 10:21:26.776574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.600 [2024-11-06 10:21:26.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.600 [2024-11-06 10:21:26.776590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.600 [2024-11-06 10:21:26.776597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.600 [2024-11-06 10:21:26.776605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:27.600 [2024-11-06 10:21:26.776612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.600 [2024-11-06 10:21:26.776620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:29:27.600 [2024-11-06 10:21:26.776646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:29:27.600 [2024-11-06 10:21:26.776661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7ed80 (9): Bad file descriptor 00:29:27.600 [2024-11-06 10:21:26.870080] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:29:27.600 Running I/O for 1 seconds... 00:29:27.600 11999.00 IOPS, 46.87 MiB/s 00:29:27.600 Latency(us) 00:29:27.600 [2024-11-06T09:21:31.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.600 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:27.600 Verification LBA range: start 0x0 length 0x4000 00:29:27.600 NVMe0n1 : 1.01 12061.88 47.12 0.00 0.00 10560.42 2635.09 10540.37 00:29:27.600 [2024-11-06T09:21:31.101Z] =================================================================================================================== 00:29:27.600 [2024-11-06T09:21:31.101Z] Total : 12061.88 47.12 0.00 0.00 10560.42 2635.09 10540.37 00:29:27.600 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:27.600 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:27.861 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:28.122 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:28.122 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:28.383 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:28.383 10:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:31.759 10:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:31.759 10:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4019298 ']' 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux 
= Linux ']' 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4019298' 00:29:31.759 killing process with pid 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4019298 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:31.759 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.020 rmmod nvme_tcp 00:29:32.020 rmmod nvme_fabrics 00:29:32.020 rmmod nvme_keyring 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4015157 ']' 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4015157 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4015157 ']' 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4015157 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:32.020 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4015157 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4015157' 00:29:32.281 killing process with pid 
4015157 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4015157 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4015157 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.281 10:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.828 00:29:34.828 real 0m40.876s 00:29:34.828 user 2m3.902s 00:29:34.828 sys 0m9.041s 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.828 ************************************ 00:29:34.828 END TEST nvmf_failover 00:29:34.828 ************************************ 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.828 ************************************ 00:29:34.828 START TEST nvmf_host_discovery 00:29:34.828 ************************************ 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:34.828 * Looking for test storage... 
00:29:34.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.828 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:34.829 10:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.829 --rc genhtml_branch_coverage=1 00:29:34.829 --rc genhtml_function_coverage=1 00:29:34.829 --rc genhtml_legend=1 00:29:34.829 --rc geninfo_all_blocks=1 00:29:34.829 --rc geninfo_unexecuted_blocks=1 00:29:34.829 00:29:34.829 ' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.829 --rc genhtml_branch_coverage=1 00:29:34.829 --rc genhtml_function_coverage=1 00:29:34.829 --rc genhtml_legend=1 00:29:34.829 --rc geninfo_all_blocks=1 00:29:34.829 --rc geninfo_unexecuted_blocks=1 00:29:34.829 00:29:34.829 ' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.829 --rc genhtml_branch_coverage=1 00:29:34.829 --rc genhtml_function_coverage=1 00:29:34.829 --rc genhtml_legend=1 00:29:34.829 --rc geninfo_all_blocks=1 00:29:34.829 --rc geninfo_unexecuted_blocks=1 00:29:34.829 00:29:34.829 ' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:34.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.829 --rc genhtml_branch_coverage=1 00:29:34.829 --rc genhtml_function_coverage=1 00:29:34.829 --rc genhtml_legend=1 00:29:34.829 --rc geninfo_all_blocks=1 00:29:34.829 --rc geninfo_unexecuted_blocks=1 00:29:34.829 00:29:34.829 ' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:34.829 10:21:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:34.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.829 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.830 10:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:42.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:42.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.977 10:21:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:42.977 Found net devices under 0000:31:00.0: cvl_0_0 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:42.977 Found net devices under 0000:31:00.1: cvl_0_1 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.977 
10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.977 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.978 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:29:43.239 00:29:43.239 --- 10.0.0.2 ping statistics --- 00:29:43.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.239 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:29:43.239 00:29:43.239 --- 10.0.0.1 ping statistics --- 00:29:43.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.239 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4026175 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4026175 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 4026175 ']' 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:43.239 10:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.239 [2024-11-06 10:21:46.686260] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:29:43.239 [2024-11-06 10:21:46.686327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.502 [2024-11-06 10:21:46.793406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.502 [2024-11-06 10:21:46.842748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.502 [2024-11-06 10:21:46.842800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.502 [2024-11-06 10:21:46.842815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.502 [2024-11-06 10:21:46.842823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.502 [2024-11-06 10:21:46.842829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.502 [2024-11-06 10:21:46.843644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 [2024-11-06 10:21:47.544734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 [2024-11-06 10:21:47.557013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 null0 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.076 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.337 null1 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4026391 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4026391 /tmp/host.sock 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 4026391 ']' 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:44.337 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:44.337 10:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.337 [2024-11-06 10:21:47.653547] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:29:44.338 [2024-11-06 10:21:47.653611] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026391 ] 00:29:44.338 [2024-11-06 10:21:47.735773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.338 [2024-11-06 10:21:47.777658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.281 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 [2024-11-06 10:21:48.768034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:45.282 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:45.544 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.544 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:45.544 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:45.544 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.544 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:29:45.545 10:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:29:46.117 [2024-11-06 10:21:49.514021] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:46.117 [2024-11-06 10:21:49.514040] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:46.117 [2024-11-06 10:21:49.514053] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:46.378 
[2024-11-06 10:21:49.641477] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:46.378 [2024-11-06 10:21:49.822636] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:46.378 [2024-11-06 10:21:49.823604] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe29650:1 started. 00:29:46.378 [2024-11-06 10:21:49.825210] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:46.378 [2024-11-06 10:21:49.825227] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:46.378 [2024-11-06 10:21:49.832661] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe29650 was disconnected and freed. delete nvme_qpair. 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.640 10:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.640 10:21:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:46.640 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:46.902 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:46.903 [2024-11-06 10:21:50.207455] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe299d0:1 started. 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:46.903 [2024-11-06 10:21:50.213555] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe299d0 was disconnected and freed. delete nvme_qpair. 
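Stripped of the waitforcondition polling and the jq/sort/xargs plumbing, the target-side RPC sequence that produced the state above (a discovery subsystem listening on 8009, cnode0 carrying null0 and null1 behind a data listener on 4420, and nqn.2021-12.io.spdk:test as the allowed host) is short. The lines below sketch that sequence using scripts/rpc.py instead of the test's rpc_cmd wrapper; they only restate calls that appear in the trace, keep the same argument values, and assume the target's default RPC socket is reachable from the calling shell.

    RPC=./scripts/rpc.py                                   # talks to the nvmf_tgt started earlier in this run
    $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as invoked above
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512                   # null bdevs, arguments as passed above
    $RPC bdev_null_create null1 1000 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1    # second namespace -> nvme0n2 via an AEN

On the host side, the matching state is driven by the second nvmf_tgt instance started with -r /tmp/host.sock, a single bdev_nvme_start_discovery call against 10.0.0.2:8009 with -b nvme -f ipv4 -q nqn.2021-12.io.spdk:test, and repeated bdev_nvme_get_controllers / bdev_get_bdevs queries until nvme0 and its namespaces appear, which is exactly the polling visible in the surrounding trace.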
00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 [2024-11-06 10:21:50.284045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:46.903 [2024-11-06 10:21:50.284603] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:46.903 [2024-11-06 10:21:50.284627] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:46.903 10:21:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:46.903 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:46.904 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:47.165 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.165 [2024-11-06 10:21:50.414029] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:47.165 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:47.165 10:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:29:47.165 [2024-11-06 10:21:50.477883] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:29:47.165 [2024-11-06 10:21:50.477921] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:47.165 [2024-11-06 10:21:50.477930] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:47.165 [2024-11-06 10:21:50.477936] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.108 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.109 [2024-11-06 10:21:51.559978] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:48.109 [2024-11-06 10:21:51.560003] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:48.109 [2024-11-06 10:21:51.567514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.109 [2024-11-06 10:21:51.567532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.109 [2024-11-06 10:21:51.567542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.109 [2024-11-06 10:21:51.567549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.109 [2024-11-06 10:21:51.567557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.109 [2024-11-06 10:21:51.567565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.109 [2024-11-06 10:21:51.567573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.109 [2024-11-06 10:21:51.567580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.109 [2024-11-06 10:21:51.567587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.109 [2024-11-06 10:21:51.577528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.109 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.109 [2024-11-06 10:21:51.587566] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.109 [2024-11-06 10:21:51.587578] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.109 [2024-11-06 10:21:51.587583] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:48.109 [2024-11-06 10:21:51.587589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.109 [2024-11-06 10:21:51.587606] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:48.109 [2024-11-06 10:21:51.588087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.109 [2024-11-06 10:21:51.588125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.109 [2024-11-06 10:21:51.588137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.109 [2024-11-06 10:21:51.588158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.109 [2024-11-06 10:21:51.588187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.109 [2024-11-06 10:21:51.588196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.109 [2024-11-06 10:21:51.588207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.109 [2024-11-06 10:21:51.588214] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.109 [2024-11-06 10:21:51.588220] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
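Editor's note: the connect() failed, errno = 111 (ECONNREFUSED) storm against 10.0.0.2:4420 that starts here is the point of this test step: host/discovery.sh@127 above just removed the 4420 listener, so the bdev_nvme reconnect poller keeps failing on that path until the discovery log page reports it gone. The waits that bracket this use the rpc_cmd-plus-jq pipelines traced at host/discovery.sh@55 and @63; a hedged sketch of those helpers (the pipelines are copied from the trace, the function wrappers are assumptions):

    # bdev names visible to the host app on /tmp/host.sock, e.g. "nvme0n1 nvme0n2"
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # connected path ports for a controller, e.g. "4420 4421" before the removal, "4421" after it
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }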
00:29:48.109 [2024-11-06 10:21:51.588225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.109 [2024-11-06 10:21:51.597641] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.109 [2024-11-06 10:21:51.597655] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.109 [2024-11-06 10:21:51.597660] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:48.109 [2024-11-06 10:21:51.597665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.109 [2024-11-06 10:21:51.597681] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:48.109 [2024-11-06 10:21:51.598115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.109 [2024-11-06 10:21:51.598153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.109 [2024-11-06 10:21:51.598163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.109 [2024-11-06 10:21:51.598182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.109 [2024-11-06 10:21:51.598195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.109 [2024-11-06 10:21:51.598202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.109 [2024-11-06 10:21:51.598210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.109 [2024-11-06 10:21:51.598218] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.109 [2024-11-06 10:21:51.598223] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:48.109 [2024-11-06 10:21:51.598232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.109 [2024-11-06 10:21:51.607715] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.109 [2024-11-06 10:21:51.607732] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.109 [2024-11-06 10:21:51.607738] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:48.109 [2024-11-06 10:21:51.607742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.109 [2024-11-06 10:21:51.607759] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:48.109 [2024-11-06 10:21:51.608098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.109 [2024-11-06 10:21:51.608113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.109 [2024-11-06 10:21:51.608122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.109 [2024-11-06 10:21:51.608134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.109 [2024-11-06 10:21:51.608144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.109 [2024-11-06 10:21:51.608151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.109 [2024-11-06 10:21:51.608159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.109 [2024-11-06 10:21:51.608165] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.109 [2024-11-06 10:21:51.608170] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:48.109 [2024-11-06 10:21:51.608174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.370 [2024-11-06 10:21:51.617790] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.370 [2024-11-06 10:21:51.617803] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.370 [2024-11-06 10:21:51.617808] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:48.370 [2024-11-06 10:21:51.617813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.370 [2024-11-06 10:21:51.617827] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:48.370 [2024-11-06 10:21:51.618142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.370 [2024-11-06 10:21:51.618155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.370 [2024-11-06 10:21:51.618162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.370 [2024-11-06 10:21:51.618173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.370 [2024-11-06 10:21:51.618184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.370 [2024-11-06 10:21:51.618190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.370 [2024-11-06 10:21:51.618202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.370 [2024-11-06 10:21:51.618208] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.370 [2024-11-06 10:21:51.618213] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:48.370 [2024-11-06 10:21:51.618217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.370 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.370 [2024-11-06 10:21:51.627859] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.370 [2024-11-06 10:21:51.627875] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.370 [2024-11-06 10:21:51.627880] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
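Editor's note: condensed, the target-side mutations and host-side expectations exercised in this stretch of the log are the following (every command and condition is copied from the trace; the grouping and comments are editorial):

    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                                 # @111: second namespace -> nvme0n2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421      # @118: second path
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420   # @127: drop the first path

    # after each mutation the host is polled until discovery has caught up:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'                               # @113 / @121 / @130
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'      # @122
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'                 # @131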
00:29:48.370 [2024-11-06 10:21:51.627885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.370 [2024-11-06 10:21:51.627899] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:48.370 [2024-11-06 10:21:51.628230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.370 [2024-11-06 10:21:51.628242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.370 [2024-11-06 10:21:51.628250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.370 [2024-11-06 10:21:51.628261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.371 [2024-11-06 10:21:51.628272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.371 [2024-11-06 10:21:51.628278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.371 [2024-11-06 10:21:51.628286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.371 [2024-11-06 10:21:51.628292] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.371 [2024-11-06 10:21:51.628297] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:48.371 [2024-11-06 10:21:51.628301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.371 [2024-11-06 10:21:51.637932] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:48.371 [2024-11-06 10:21:51.637947] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:48.371 [2024-11-06 10:21:51.637960] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:48.371 [2024-11-06 10:21:51.637965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.371 [2024-11-06 10:21:51.637982] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:48.371 [2024-11-06 10:21:51.638282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.371 [2024-11-06 10:21:51.638294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9d90 with addr=10.0.0.2, port=4420 00:29:48.371 [2024-11-06 10:21:51.638302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9d90 is same with the state(6) to be set 00:29:48.371 [2024-11-06 10:21:51.638313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9d90 (9): Bad file descriptor 00:29:48.371 [2024-11-06 10:21:51.638324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:48.371 [2024-11-06 10:21:51.638331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:48.371 [2024-11-06 10:21:51.638338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:48.371 [2024-11-06 10:21:51.638344] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:48.371 [2024-11-06 10:21:51.638349] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:48.371 [2024-11-06 10:21:51.638353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:48.371 [2024-11-06 10:21:51.647656] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:48.371 [2024-11-06 10:21:51.647675] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.371 10:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:48.371 10:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.371 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:48.372 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:48.633 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:48.634 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.634 10:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.577 [2024-11-06 10:21:52.940642] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:49.577 [2024-11-06 10:21:52.940664] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:49.577 [2024-11-06 10:21:52.940677] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:49.577 [2024-11-06 10:21:53.027942] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:49.838 [2024-11-06 10:21:53.337580] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:49.838 [2024-11-06 10:21:53.338418] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe286c0:1 started. 
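Editor's note: with discovery re-started at host/discovery.sh@141, the entries that follow exercise the error paths of bdev_nvme_start_discovery. Each call is wrapped in NOT, a helper that inverts the command's exit status (its body is not shown in this trace), and the JSON-RPC error is checked; the commands below are copied verbatim from the trace:

    # @143 (and @149 with -b nvme_second): re-issuing start_discovery against 10.0.0.2:8009 while
    # the first discovery service is still running must fail with code -17 "File exists"
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # @155: pointing discovery at port 8010, where nothing listens, with a 3000 ms attach timeout
    # must fail with code -110 "Connection timed out"
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000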
00:29:50.100 [2024-11-06 10:21:53.340276] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:50.100 [2024-11-06 10:21:53.340307] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.100 request: 00:29:50.100 { 00:29:50.100 "name": "nvme", 00:29:50.100 "trtype": "tcp", 00:29:50.100 "traddr": "10.0.0.2", 00:29:50.100 "adrfam": "ipv4", 00:29:50.100 "trsvcid": "8009", 00:29:50.100 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:50.100 "wait_for_attach": true, 00:29:50.100 "method": "bdev_nvme_start_discovery", 00:29:50.100 "req_id": 1 00:29:50.100 } 00:29:50.100 Got JSON-RPC error response 00:29:50.100 response: 00:29:50.100 { 00:29:50.100 "code": -17, 00:29:50.100 "message": "File exists" 00:29:50.100 } 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.100 10:21:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:50.100 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.101 [2024-11-06 10:21:53.383690] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe286c0 was disconnected and freed. delete nvme_qpair. 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.101 request: 00:29:50.101 { 00:29:50.101 "name": "nvme_second", 00:29:50.101 "trtype": "tcp", 00:29:50.101 "traddr": "10.0.0.2", 00:29:50.101 "adrfam": "ipv4", 00:29:50.101 "trsvcid": "8009", 00:29:50.101 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:50.101 "wait_for_attach": true, 00:29:50.101 "method": 
"bdev_nvme_start_discovery", 00:29:50.101 "req_id": 1 00:29:50.101 } 00:29:50.101 Got JSON-RPC error response 00:29:50.101 response: 00:29:50.101 { 00:29:50.101 "code": -17, 00:29:50.101 "message": "File exists" 00:29:50.101 } 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:50.101 10:21:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.101 10:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.487 [2024-11-06 10:21:54.604149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.487 [2024-11-06 10:21:54.604179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe10e60 with addr=10.0.0.2, port=8010 00:29:51.487 [2024-11-06 10:21:54.604193] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:51.487 [2024-11-06 10:21:54.604201] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:51.487 [2024-11-06 10:21:54.604208] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:52.430 [2024-11-06 10:21:55.606493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.430 [2024-11-06 10:21:55.606517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe10e60 with addr=10.0.0.2, port=8010 00:29:52.430 [2024-11-06 10:21:55.606529] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:52.430 [2024-11-06 10:21:55.606536] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.430 [2024-11-06 10:21:55.606543] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:53.375 [2024-11-06 10:21:56.608492] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:53.375 request: 00:29:53.375 { 00:29:53.375 "name": "nvme_second", 00:29:53.375 "trtype": "tcp", 00:29:53.375 "traddr": "10.0.0.2", 00:29:53.375 "adrfam": "ipv4", 00:29:53.375 "trsvcid": "8010", 00:29:53.375 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:53.375 "wait_for_attach": false, 00:29:53.375 "attach_timeout_ms": 3000, 00:29:53.375 "method": "bdev_nvme_start_discovery", 00:29:53.375 "req_id": 1 00:29:53.375 } 00:29:53.375 Got JSON-RPC error response 00:29:53.375 response: 00:29:53.375 { 00:29:53.375 "code": -110, 00:29:53.375 "message": "Connection timed out" 00:29:53.375 } 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:53.375 10:21:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4026391 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.375 rmmod nvme_tcp 00:29:53.375 rmmod nvme_fabrics 00:29:53.375 rmmod nvme_keyring 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4026175 ']' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4026175 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 4026175 ']' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 4026175 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4026175 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4026175' 00:29:53.375 killing process with pid 4026175 00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 4026175 
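The two JSON-RPC failures above are exactly the negative cases this test asserts: re-registering an already-running discovery service returns -17 (File exists), and attaching a fresh discovery controller to the unreachable 10.0.0.2:8010 with a 3000 ms attach timeout returns -110 (Connection timed out) once the connect retries are exhausted. A minimal way to replay the second check by hand, assuming rpc_cmd in the trace is SPDK's usual wrapper around scripts/rpc.py and that the host application is still serving RPCs on /tmp/host.sock (both taken from the log), is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Expected to fail with -110 after the 3000 ms attach timeout; nothing listens on port 8010.
if "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "unexpected: second discovery attach succeeded" >&2
fi
# The original discovery service and its bdevs (nvme0n1, nvme0n2) must be unchanged afterwards.
"$rpc" -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
"$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'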
00:29:53.375 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 4026175 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.637 10:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.553 10:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.553 00:29:55.553 real 0m21.186s 00:29:55.553 user 0m23.514s 00:29:55.553 sys 0m7.826s 00:29:55.553 10:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:55.553 10:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.553 ************************************ 00:29:55.553 END TEST nvmf_host_discovery 00:29:55.553 ************************************ 00:29:55.553 10:21:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:55.553 10:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:55.553 10:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:55.553 10:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.816 ************************************ 00:29:55.816 START TEST nvmf_host_multipath_status 00:29:55.816 ************************************ 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:55.816 * Looking for test storage... 
00:29:55.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.816 --rc genhtml_branch_coverage=1 00:29:55.816 --rc genhtml_function_coverage=1 00:29:55.816 --rc genhtml_legend=1 00:29:55.816 --rc geninfo_all_blocks=1 00:29:55.816 --rc geninfo_unexecuted_blocks=1 00:29:55.816 00:29:55.816 ' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.816 --rc genhtml_branch_coverage=1 00:29:55.816 --rc genhtml_function_coverage=1 00:29:55.816 --rc genhtml_legend=1 00:29:55.816 --rc geninfo_all_blocks=1 00:29:55.816 --rc geninfo_unexecuted_blocks=1 00:29:55.816 00:29:55.816 ' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.816 --rc genhtml_branch_coverage=1 00:29:55.816 --rc genhtml_function_coverage=1 00:29:55.816 --rc genhtml_legend=1 00:29:55.816 --rc geninfo_all_blocks=1 00:29:55.816 --rc geninfo_unexecuted_blocks=1 00:29:55.816 00:29:55.816 ' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.816 --rc genhtml_branch_coverage=1 00:29:55.816 --rc genhtml_function_coverage=1 00:29:55.816 --rc genhtml_legend=1 00:29:55.816 --rc geninfo_all_blocks=1 00:29:55.816 --rc geninfo_unexecuted_blocks=1 00:29:55.816 00:29:55.816 ' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
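The helpers traced just above are scripts/common.sh deciding whether the lcov version it detected (1.15) predates 2 before it settles on the --rc lcov_branch_coverage / lcov_function_coverage options. Condensed, the comparison splits each version string on '.', '-' or ':' and compares the fields numerically; the sketch below keeps the cmp_versions name from the trace but drops the numeric-only decimal guard for brevity:

# Condensed from the version check traced above; fields missing in the shorter version count as 0.
cmp_versions() {
    local op=$2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
# Usage matching the trace: is lcov 1.15 older than 2?
cmp_versions 1.15 '<' 2 && echo yes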
00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.816 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:56.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.079 10:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.226 10:22:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:04.226 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:04.226 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:04.226 Found net devices under 0000:31:00.0: cvl_0_0 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:30:04.226 Found net devices under 0000:31:00.1: cvl_0_1 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.226 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.227 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.488 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.488 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.488 10:22:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.488 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:30:04.489 00:30:04.489 --- 10.0.0.2 ping statistics --- 00:30:04.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.489 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:30:04.489 00:30:04.489 --- 10.0.0.1 ping statistics --- 00:30:04.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.489 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4032998 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4032998 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4032998 ']' 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:04.489 10:22:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:04.489 10:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:04.489 [2024-11-06 10:22:07.931918] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:04.489 [2024-11-06 10:22:07.931983] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.750 [2024-11-06 10:22:08.025809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.750 [2024-11-06 10:22:08.067418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.750 [2024-11-06 10:22:08.067455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.750 [2024-11-06 10:22:08.067464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.750 [2024-11-06 10:22:08.067470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.750 [2024-11-06 10:22:08.067476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.750 [2024-11-06 10:22:08.068640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.750 [2024-11-06 10:22:08.068641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4032998 00:30:05.321 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:05.582 [2024-11-06 10:22:08.926691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.582 10:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:05.842 Malloc0 00:30:05.842 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:30:05.842 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.102 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.363 [2024-11-06 10:22:09.605143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.363 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:06.363 [2024-11-06 10:22:09.757475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4033422 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4033422 /var/tmp/bdevperf.sock 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4033422 ']' 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
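By this point the target side of the multipath test is fully assembled: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 created with ANA reporting (-r), the namespace attached, and listeners opened on both 10.0.0.2:4420 and 10.0.0.2:4421; bdevperf is then launched as the host. Pulled together from the rpc.py calls in the trace (nvmf_tgt itself was started earlier inside the cvl_0_0_ns_spdk namespace with core mask 0x3), the target setup reduces to roughly this sequence; it is a sketch of what the log shows, not the test script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
# -a allow any host, -s serial number, -r ANA reporting, -m max namespaces (flags as logged)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same subsystem give the host two paths to the same namespace.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421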
00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:06.364 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:06.623 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:06.623 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:30:06.623 10:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:06.883 10:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:07.142 Nvme0n1 00:30:07.142 10:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:07.403 Nvme0n1 00:30:07.403 10:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:07.403 10:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:09.316 10:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:09.316 10:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:09.576 10:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:09.837 10:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:10.779 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:10.779 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:10.779 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.779 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:11.039 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.039 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:11.039 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.039 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.300 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:11.562 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.562 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:11.562 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.562 10:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:11.823 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:30:12.085 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:12.345 10:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:13.288 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:13.288 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:13.288 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.288 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:13.550 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:13.550 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:13.550 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.550 10:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:13.550 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.550 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:13.550 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.550 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:13.811 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.811 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:13.811 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.811 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:14.072 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.072 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:14.072 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
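This stretch of the run steps the two listeners through the ANA combinations optimized/optimized, non_optimized/optimized, non_optimized/non_optimized and non_optimized/inaccessible, and after each change (plus a one-second settle) re-asks bdevperf which path it reports as current, connected and accessible. Every check is the same two-step query; condensed from the rpc.py and jq invocations in the trace (socket paths and the cnode1 NQN as logged), one round looks like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side: change the ANA state advertised on one listener (port 4421 shown here).
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1
# Host side: query bdevperf over its RPC socket and filter a single path attribute.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'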
00:30:14.072 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:14.333 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:14.594 10:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:14.855 10:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:15.797 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:15.797 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:15.797 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.797 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.058 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:16.317 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.317 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:16.317 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.317 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:16.578 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.578 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:16.578 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.578 10:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.578 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.578 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.578 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.578 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.839 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.839 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:16.839 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:17.099 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:17.099 10:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:18.484 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:18.484 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.484 10:22:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.484 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.484 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.484 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.485 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.485 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.485 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.485 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.745 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.745 10:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:18.745 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.745 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:18.745 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.745 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.006 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.006 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.006 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.006 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.267 10:22:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:19.267 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:19.526 10:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:19.787 10:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:20.730 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:20.730 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:20.730 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.730 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.992 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.252 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.252 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.253 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.253 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:21.512 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.512 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:21.513 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.513 10:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:21.823 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:22.173 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:22.173 10:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:23.115 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:23.115 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:23.115 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.115 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:23.376 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.376 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:23.376 10:22:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.376 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:23.637 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.637 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:23.637 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.637 10:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:23.637 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.637 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:23.637 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.637 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:23.898 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.898 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:23.898 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.898 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:24.159 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.159 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:24.159 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.159 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:24.418 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.418 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:24.418 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:24.418 10:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:24.678 10:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:24.937 10:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:25.875 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:25.875 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:25.875 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.875 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.136 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:26.396 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.396 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:26.396 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.396 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:26.656 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.656 10:22:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:26.656 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:26.656 10:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:26.916 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:27.178 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:27.438 10:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:28.382 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:28.382 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:28.382 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.382 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:28.642 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.642 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:28.642 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.642 10:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:28.642 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.642 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:28.642 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.642 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.903 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.903 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.903 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.903 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.163 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:29.423 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.423 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:29.423 10:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:29.683 10:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:29.944 10:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
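Each cycle in this part of the log has the same shape: set_ANA_state (host/multipath_status.sh lines 59-60) flips the ANA state of the two listeners on ports 4420 and 4421 via nvmf_subsystem_listener_set_ana_state, the test sleeps one second so the host can pick up the ANA change, and check_status then re-reads the io_paths. Because the bdev was switched to an active_active multipath policy just before (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, line 116 in the traces), a non_optimized/non_optimized combination is expected to leave both ports current, connected and accessible, which the following check_status true true true true true true verifies. A minimal sketch of one such cycle, reusing the port_status sketch above and the same target NQN, address and ports as this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # set_ANA_state <state for 4420> <state for 4421>
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state non_optimized non_optimized
    sleep 1
    # with an active_active policy both paths should now report current=true
    port_status 4420 current true && port_status 4421 current true
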
00:30:30.885 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:30.885 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:30.885 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.885 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.146 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.406 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.406 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.406 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.406 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.666 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.666 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:31.666 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.666 10:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.666 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.666 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.666 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.666 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.927 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.927 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:31.927 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:32.187 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:32.447 10:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:33.388 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:33.388 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.388 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.388 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.648 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.648 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:33.648 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.648 10:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.648 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.648 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.648 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.648 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:33.909 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:33.909 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:33.909 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.909 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.169 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4033422 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 4033422 ']' 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4033422 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4033422 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4033422' 00:30:34.430 killing process with pid 4033422 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4033422 00:30:34.430 10:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4033422 00:30:34.430 { 00:30:34.430 "results": [ 00:30:34.430 { 00:30:34.430 "job": "Nvme0n1", 
00:30:34.430 "core_mask": "0x4", 00:30:34.430 "workload": "verify", 00:30:34.430 "status": "terminated", 00:30:34.430 "verify_range": { 00:30:34.430 "start": 0, 00:30:34.430 "length": 16384 00:30:34.430 }, 00:30:34.430 "queue_depth": 128, 00:30:34.430 "io_size": 4096, 00:30:34.430 "runtime": 26.977541, 00:30:34.430 "iops": 10818.814064632503, 00:30:34.430 "mibps": 42.260992439970714, 00:30:34.430 "io_failed": 0, 00:30:34.430 "io_timeout": 0, 00:30:34.430 "avg_latency_us": 11811.313314123538, 00:30:34.430 "min_latency_us": 308.9066666666667, 00:30:34.430 "max_latency_us": 3075822.933333333 00:30:34.430 } 00:30:34.430 ], 00:30:34.430 "core_count": 1 00:30:34.430 } 00:30:34.709 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4033422 00:30:34.709 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:34.709 [2024-11-06 10:22:09.795742] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:34.710 [2024-11-06 10:22:09.795787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033422 ] 00:30:34.710 [2024-11-06 10:22:09.852121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.710 [2024-11-06 10:22:09.880871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.710 Running I/O for 90 seconds... 00:30:34.710 9541.00 IOPS, 37.27 MiB/s [2024-11-06T09:22:38.211Z] 9612.00 IOPS, 37.55 MiB/s [2024-11-06T09:22:38.211Z] 9635.33 IOPS, 37.64 MiB/s [2024-11-06T09:22:38.211Z] 9646.25 IOPS, 37.68 MiB/s [2024-11-06T09:22:38.211Z] 9891.00 IOPS, 38.64 MiB/s [2024-11-06T09:22:38.211Z] 10434.67 IOPS, 40.76 MiB/s [2024-11-06T09:22:38.211Z] 10821.14 IOPS, 42.27 MiB/s [2024-11-06T09:22:38.211Z] 10804.50 IOPS, 42.21 MiB/s [2024-11-06T09:22:38.211Z] 10679.78 IOPS, 41.72 MiB/s [2024-11-06T09:22:38.211Z] 10580.10 IOPS, 41.33 MiB/s [2024-11-06T09:22:38.211Z] 10489.27 IOPS, 40.97 MiB/s [2024-11-06T09:22:38.211Z] [2024-11-06 10:22:22.895350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.895919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.895929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.710 [2024-11-06 10:22:22.895935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.710 [2024-11-06 10:22:22.896651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.896931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.896936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.897319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.710 [2024-11-06 10:22:22.897328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.897339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.710 [2024-11-06 10:22:22.897345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.710 [2024-11-06 10:22:22.897358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.711 [2024-11-06 10:22:22.897441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:34.711 [2024-11-06 10:22:22.897498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.897782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.897787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.711 [2024-11-06 10:22:22.898165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.711 [2024-11-06 10:22:22.898176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 
10:22:22.898197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82136 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.898992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.899009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.899015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.899025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.899030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.899046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.899056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.899061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.712 [2024-11-06 10:22:22.899072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.712 [2024-11-06 10:22:22.899081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:30:34.713 [2024-11-06 10:22:22.899344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.899991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.899996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.713 [2024-11-06 10:22:22.900269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.713 [2024-11-06 10:22:22.900347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.713 [2024-11-06 10:22:22.900580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.713 [2024-11-06 10:22:22.900585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.900966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.714 [2024-11-06 10:22:22.901489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:34.714 [2024-11-06 10:22:22.901547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.901580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.901585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.903499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.903507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.903519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.903524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.903534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.903539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.903549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.714 [2024-11-06 10:22:22.903555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.714 [2024-11-06 10:22:22.903565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.903991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.903996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.715 [2024-11-06 10:22:22.904057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.904192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.904197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.715 [2024-11-06 10:22:22.915770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.715 [2024-11-06 10:22:22.915780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.915974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.716 
[2024-11-06 10:22:22.915991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.915996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.916099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.916104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917903] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.917991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.917997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.716 [2024-11-06 10:22:22.918007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.716 [2024-11-06 10:22:22.918012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.717 [2024-11-06 10:22:22.918504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.717 [2024-11-06 10:22:22.918599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.717 [2024-11-06 10:22:22.918609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.718 [2024-11-06 10:22:22.918614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:34.718 [2024-11-06 10:22:22.918671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.918952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.918958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.718 10406.08 IOPS, 40.65 MiB/s [2024-11-06T09:22:38.219Z] [2024-11-06 10:22:22.919586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 
10:22:22.919597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.718 [2024-11-06 10:22:22.919814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.718 [2024-11-06 10:22:22.919819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.919989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.919994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:30:34.719 [2024-11-06 10:22:22.920219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.719 [2024-11-06 10:22:22.920740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.719 [2024-11-06 10:22:22.920750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.920990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.920995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.921005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.921011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.921021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.921026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.921036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.921041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.921052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.720 [2024-11-06 10:22:22.921057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.921067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.720 [2024-11-06 10:22:22.927824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.927890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.927895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.720 [2024-11-06 10:22:22.928369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.720 [2024-11-06 10:22:22.928375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:30:34.721 [2024-11-06 10:22:22.928531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.721 [2024-11-06 10:22:22.928639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.928985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.721 [2024-11-06 10:22:22.928991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.721 [2024-11-06 10:22:22.929001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.721 [2024-11-06 10:22:22.929006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:30:34.722 [2024-11-06 10:22:22.929488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.722 [2024-11-06 10:22:22.929631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.722 [2024-11-06 10:22:22.929636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.929735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930526] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 
10:22:22.930682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.723 [2024-11-06 10:22:22.930901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.723 [2024-11-06 10:22:22.930911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.724 [2024-11-06 10:22:22.930916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.930927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.930932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.930942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.930947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.930958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.930963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.724 [2024-11-06 10:22:22.931521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.724 [2024-11-06 10:22:22.931531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.724 [2024-11-06 10:22:22.931536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:30:34.724 [2024-11-06 10:22:22.931547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.724 [2024-11-06 10:22:22.931553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... hundreds of further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: each remaining READ/WRITE on qid:1 (nsid:1, len:8, lba 81480-82496 in this excerpt) is printed and completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:30:34.730 [2024-11-06 10:22:22.941003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.730 [2024-11-06 10:22:22.941008] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.730 [2024-11-06 10:22:22.941163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.941173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.941178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.730 [2024-11-06 10:22:22.942527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.730 [2024-11-06 10:22:22.942537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.942543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:30:34.731 [2024-11-06 10:22:22.942743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.942993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.942999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.731 [2024-11-06 10:22:22.943610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.731 [2024-11-06 10:22:22.943751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.731 [2024-11-06 10:22:22.943757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.943986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.943998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 
nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.732 [2024-11-06 10:22:22.944736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.732 [2024-11-06 10:22:22.944741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:30:34.732 [2024-11-06 10:22:22.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.944928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.944939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.944954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.944963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.944973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.944979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.944989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.944994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.945968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.946190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.946197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.946209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-06 10:22:22.946215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.733 [2024-11-06 10:22:22.946225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.946887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.946892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.734 [2024-11-06 10:22:22.947246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
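The repeated NOTICE pairs in this stretch of the log are SPDK printing each submitted WRITE/READ command (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion); every completion shown carries status "(03/02)", i.e. Status Code Type 0x3 (Path Related) with Status Code 0x02, which the log itself names ASYMMETRIC ACCESS INACCESSIBLE — presumably the ANA-inaccessible condition this nvmf test exercises. As a minimal standalone sketch (not SPDK's own print routine; hypothetical helper, using the NVMe completion status halfword layout: phase in bit 0, SC in bits 8:1, SCT in bits 11:9, More in bit 14, DNR in bit 15), the (sct/sc) and p/m/dnr fields unpack like this:

/* decode_status.c - illustrative only, mirrors the "(03/02) ... p:0 m:0 dnr:0"
 * fields seen in the completions logged above. */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* phase tag        */
    unsigned sc  = (status >> 1) & 0xff;  /* status code      */
    unsigned sct = (status >> 9) & 0x7;   /* status code type */
    unsigned m   = (status >> 14) & 0x1;  /* more             */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x3 && sc == 0x02) ? "  /* ANA inaccessible */" : "");
}

int main(void)
{
    /* SCT=0x3, SC=0x02, P=M=DNR=0 -> matches the completions in this log */
    decode_status((uint16_t)((0x3u << 9) | (0x02u << 1)));
    return 0;
}

Compiled and run, this prints "(03/02) p:0 m:0 dnr:0", matching the completion lines here; dnr:0 indicates the controller is not forbidding a retry of these I/Os.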
00:30:34.734 [2024-11-06 10:22:22.947504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-06 10:22:22.947555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.734 [2024-11-06 10:22:22.947751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.947868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.947874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.948373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.948477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.735 [2024-11-06 10:22:22.948482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.735 [2024-11-06 10:22:22.950390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:34.735 [2024-11-06 10:22:22.950588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.735 [2024-11-06 10:22:22.950593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:30:34.736 [2024-11-06 10:22:22.950911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.950993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.950998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.736 [2024-11-06 10:22:22.951272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.736 [2024-11-06 10:22:22.951278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.737 [2024-11-06 10:22:22.951392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:34.737 [2024-11-06 10:22:22.951976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.737 [2024-11-06 10:22:22.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.951995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:30:34.738 [2024-11-06 10:22:22.952035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.952135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.952980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.952985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.953006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.738 [2024-11-06 10:22:22.953026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:34.738 [2024-11-06 10:22:22.953048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:34.738 [2024-11-06 10:22:22.953191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.738 [2024-11-06 10:22:22.953196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:34.738 9605.62 IOPS, 37.52 MiB/s [2024-11-06T09:22:38.239Z] 8919.50 IOPS, 34.84 MiB/s [2024-11-06T09:22:38.240Z] 8324.87 IOPS, 32.52 MiB/s [2024-11-06T09:22:38.240Z] 8583.38 IOPS, 33.53 MiB/s [2024-11-06T09:22:38.240Z] 8834.53 IOPS, 34.51 MiB/s [2024-11-06T09:22:38.240Z] 9256.11 IOPS, 36.16 MiB/s [2024-11-06T09:22:38.240Z] 9661.11 IOPS, 37.74 MiB/s [2024-11-06T09:22:38.240Z] 9962.65 IOPS, 38.92 MiB/s [2024-11-06T09:22:38.240Z] 10099.24 IOPS, 39.45 MiB/s [2024-11-06T09:22:38.240Z] 10225.82 IOPS, 39.94 MiB/s [2024-11-06T09:22:38.240Z] 10467.26 IOPS, 40.89 MiB/s [2024-11-06T09:22:38.240Z] 10737.00 IOPS, 41.94 MiB/s [2024-11-06T09:22:38.240Z] [2024-11-06 10:22:35.695321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 
[2024-11-06 10:22:35.695359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.695952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.739 [2024-11-06 10:22:35.695985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.695995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.696000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.696010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.739 [2024-11-06 10:22:35.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:34.739 [2024-11-06 10:22:35.696026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:34.739 [2024-11-06 10:22:35.696031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:34.739 [2024-11-06 10:22:35.696362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:34.739 [2024-11-06 10:22:35.696371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:34.739 10909.40 IOPS, 42.61 MiB/s [2024-11-06T09:22:38.240Z]
10863.04 IOPS, 42.43 MiB/s [2024-11-06T09:22:38.240Z]
Received shutdown signal, test time was about 26.978154 seconds
00:30:34.739
00:30:34.739 Latency(us)
00:30:34.739 [2024-11-06T09:22:38.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:34.739 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:34.739 Verification LBA range: start 0x0 length 0x4000
00:30:34.739 Nvme0n1 : 26.98 10818.81 42.26 0.00 0.00 11811.31 308.91 3075822.93
00:30:34.739 [2024-11-06T09:22:38.240Z] ===================================================================================================================
00:30:34.739 [2024-11-06T09:22:38.240Z] Total : 10818.81 42.26 0.00 0.00 11811.31 308.91 3075822.93
00:30:34.739 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:35.000 rmmod nvme_tcp
00:30:35.000 rmmod nvme_fabrics
00:30:35.000 rmmod nvme_keyring
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4032998 ']'
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4032998
00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z
4032998 ']' 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4032998 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4032998 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4032998' 00:30:35.000 killing process with pid 4032998 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4032998 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4032998 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.000 10:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.545 10:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.545 00:30:37.545 real 0m41.460s 00:30:37.545 user 1m44.324s 00:30:37.545 sys 0m12.435s 00:30:37.545 10:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:37.545 10:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:37.545 ************************************ 00:30:37.545 END TEST nvmf_host_multipath_status 00:30:37.545 ************************************ 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:37.546 
10:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.546 ************************************ 00:30:37.546 START TEST nvmf_discovery_remove_ifc 00:30:37.546 ************************************ 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:37.546 * Looking for test storage... 00:30:37.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:37.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.546 --rc genhtml_branch_coverage=1 00:30:37.546 --rc genhtml_function_coverage=1 00:30:37.546 --rc genhtml_legend=1 00:30:37.546 --rc geninfo_all_blocks=1 00:30:37.546 --rc geninfo_unexecuted_blocks=1 00:30:37.546 00:30:37.546 ' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:37.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.546 --rc genhtml_branch_coverage=1 00:30:37.546 --rc genhtml_function_coverage=1 00:30:37.546 --rc genhtml_legend=1 00:30:37.546 --rc geninfo_all_blocks=1 00:30:37.546 --rc geninfo_unexecuted_blocks=1 00:30:37.546 00:30:37.546 ' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:37.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.546 --rc genhtml_branch_coverage=1 00:30:37.546 --rc genhtml_function_coverage=1 00:30:37.546 --rc genhtml_legend=1 00:30:37.546 --rc geninfo_all_blocks=1 00:30:37.546 --rc geninfo_unexecuted_blocks=1 00:30:37.546 00:30:37.546 ' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:37.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.546 --rc genhtml_branch_coverage=1 00:30:37.546 --rc genhtml_function_coverage=1 00:30:37.546 --rc genhtml_legend=1 00:30:37.546 --rc geninfo_all_blocks=1 00:30:37.546 --rc geninfo_unexecuted_blocks=1 00:30:37.546 00:30:37.546 ' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.546 
10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.546 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:37.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.547 10:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:45.692 10:22:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.692 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:45.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.693 10:22:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:45.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:45.693 Found net devices under 0000:31:00.0: cvl_0_0 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.693 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:45.694 Found net devices under 0000:31:00.1: cvl_0_1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.694 
10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:30:45.694 00:30:45.694 --- 10.0.0.2 ping statistics --- 00:30:45.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.694 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:30:45.694 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:30:45.694 00:30:45.694 --- 10.0.0.1 ping statistics --- 00:30:45.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.694 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.695 10:22:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4043842 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 4043842 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4043842 ']' 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:45.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:45.695 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.695 [2024-11-06 10:22:49.056301] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:45.695 [2024-11-06 10:22:49.056357] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.695 [2024-11-06 10:22:49.133105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.695 [2024-11-06 10:22:49.177828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.695 [2024-11-06 10:22:49.177891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.695 [2024-11-06 10:22:49.177899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.695 [2024-11-06 10:22:49.177907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.695 [2024-11-06 10:22:49.177912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.695 [2024-11-06 10:22:49.178616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.958 [2024-11-06 10:22:49.332822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.958 [2024-11-06 10:22:49.341133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:45.958 null0 00:30:45.958 [2024-11-06 10:22:49.373063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4043871 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4043871 /tmp/host.sock 00:30:45.958 10:22:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4043871 ']' 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:45.958 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.958 10:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:45.958 [2024-11-06 10:22:49.450391] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:45.958 [2024-11-06 10:22:49.450455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043871 ] 00:30:46.218 [2024-11-06 10:22:49.532829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.218 [2024-11-06 10:22:49.574330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.789 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.050 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.050 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:47.050 10:22:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.050 10:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.990 [2024-11-06 10:22:51.382855] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:47.990 [2024-11-06 10:22:51.382878] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:47.990 [2024-11-06 10:22:51.382892] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.251 [2024-11-06 10:22:51.511360] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:48.251 [2024-11-06 10:22:51.692493] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:48.251 [2024-11-06 10:22:51.693518] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e7e670:1 started. 00:30:48.251 [2024-11-06 10:22:51.695108] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:48.251 [2024-11-06 10:22:51.695156] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:48.251 [2024-11-06 10:22:51.695176] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:48.251 [2024-11-06 10:22:51.695190] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.251 [2024-11-06 10:22:51.695210] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.251 [2024-11-06 10:22:51.700686] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e7e670 was disconnected and freed. delete nvme_qpair. 
00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:48.251 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:48.513 10:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:49.455 10:22:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.716 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:49.716 10:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.659 10:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.659 10:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:50.659 10:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.601 10:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.987 10:22:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:52.987 10:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.928 [2024-11-06 10:22:57.135778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:53.928 [2024-11-06 10:22:57.135820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.928 [2024-11-06 10:22:57.135838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.928 [2024-11-06 10:22:57.135848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.928 [2024-11-06 10:22:57.135855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.928 [2024-11-06 10:22:57.135868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.928 [2024-11-06 10:22:57.135876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.928 [2024-11-06 10:22:57.135884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.928 [2024-11-06 10:22:57.135891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.928 [2024-11-06 10:22:57.135900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.928 [2024-11-06 10:22:57.135907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.928 [2024-11-06 10:22:57.135915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5b050 is same with the state(6) to be set 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.928 [2024-11-06 10:22:57.145800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b050 (9): 
Bad file descriptor 00:30:53.928 [2024-11-06 10:22:57.155839] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:53.928 [2024-11-06 10:22:57.155852] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:53.928 [2024-11-06 10:22:57.155857] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:53.928 [2024-11-06 10:22:57.155865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:53.928 [2024-11-06 10:22:57.155886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:53.928 10:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.869 [2024-11-06 10:22:58.203914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:54.869 [2024-11-06 10:22:58.203960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5b050 with addr=10.0.0.2, port=4420 00:30:54.869 [2024-11-06 10:22:58.203975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5b050 is same with the state(6) to be set 00:30:54.869 [2024-11-06 10:22:58.204003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b050 (9): Bad file descriptor 00:30:54.869 [2024-11-06 10:22:58.204389] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:54.869 [2024-11-06 10:22:58.204415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:54.869 [2024-11-06 10:22:58.204423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:54.869 [2024-11-06 10:22:58.204433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:54.869 [2024-11-06 10:22:58.204440] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:54.869 [2024-11-06 10:22:58.204446] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:30:54.869 [2024-11-06 10:22:58.204452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:54.869 [2024-11-06 10:22:58.204460] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:54.869 [2024-11-06 10:22:58.204466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.869 10:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.814 [2024-11-06 10:22:59.206837] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:55.814 [2024-11-06 10:22:59.206857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:55.814 [2024-11-06 10:22:59.206873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:55.814 [2024-11-06 10:22:59.206881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:55.814 [2024-11-06 10:22:59.206889] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:55.814 [2024-11-06 10:22:59.206896] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:55.814 [2024-11-06 10:22:59.206902] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:55.814 [2024-11-06 10:22:59.206906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:55.814 [2024-11-06 10:22:59.206928] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:55.814 [2024-11-06 10:22:59.206950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.814 [2024-11-06 10:22:59.206961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.814 [2024-11-06 10:22:59.206971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.814 [2024-11-06 10:22:59.206985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.814 [2024-11-06 10:22:59.206994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.814 [2024-11-06 10:22:59.207002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.814 [2024-11-06 10:22:59.207010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.814 [2024-11-06 10:22:59.207018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.814 [2024-11-06 10:22:59.207026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.814 [2024-11-06 10:22:59.207034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.814 [2024-11-06 10:22:59.207041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:30:55.814 [2024-11-06 10:22:59.207346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4a380 (9): Bad file descriptor 00:30:55.814 [2024-11-06 10:22:59.208359] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:55.814 [2024-11-06 10:22:59.208371] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.814 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:56.076 10:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.030 10:23:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:57.030 10:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:57.971 [2024-11-06 10:23:01.266742] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:57.971 [2024-11-06 10:23:01.266761] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:57.971 [2024-11-06 10:23:01.266775] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.971 [2024-11-06 10:23:01.394206] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:58.232 10:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:58.232 [2024-11-06 10:23:01.577311] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:58.232 [2024-11-06 10:23:01.578278] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1e659b0:1 started. 
00:30:58.232 [2024-11-06 10:23:01.579501] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:58.232 [2024-11-06 10:23:01.579537] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:58.232 [2024-11-06 10:23:01.579556] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:58.232 [2024-11-06 10:23:01.579571] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:58.232 [2024-11-06 10:23:01.579579] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:58.232 [2024-11-06 10:23:01.584935] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1e659b0 was disconnected and freed. delete nvme_qpair. 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4043871 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4043871 ']' 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4043871 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:59.174 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4043871 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4043871' 00:30:59.434 killing process with pid 4043871 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4043871 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4043871 00:30:59.434 10:23:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.434 rmmod nvme_tcp 00:30:59.434 rmmod nvme_fabrics 00:30:59.434 rmmod nvme_keyring 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:59.434 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4043842 ']' 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4043842 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4043842 ']' 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4043842 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4043842 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4043842' 00:30:59.435 killing process with pid 4043842 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4043842 00:30:59.435 10:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4043842 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.695 10:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.607 10:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.607 00:31:01.607 real 0m24.497s 00:31:01.607 user 0m29.068s 00:31:01.607 sys 0m7.510s 00:31:01.607 10:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:01.607 10:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.607 ************************************ 00:31:01.607 END TEST nvmf_discovery_remove_ifc 00:31:01.607 ************************************ 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.867 ************************************ 00:31:01.867 START TEST nvmf_identify_kernel_target 00:31:01.867 ************************************ 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:01.867 * Looking for test storage... 
00:31:01.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.867 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.868 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:02.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.129 --rc genhtml_branch_coverage=1 00:31:02.129 --rc genhtml_function_coverage=1 00:31:02.129 --rc genhtml_legend=1 00:31:02.129 --rc geninfo_all_blocks=1 00:31:02.129 --rc geninfo_unexecuted_blocks=1 00:31:02.129 00:31:02.129 ' 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:02.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.129 --rc genhtml_branch_coverage=1 00:31:02.129 --rc genhtml_function_coverage=1 00:31:02.129 --rc genhtml_legend=1 00:31:02.129 --rc geninfo_all_blocks=1 00:31:02.129 --rc geninfo_unexecuted_blocks=1 00:31:02.129 00:31:02.129 ' 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:02.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.129 --rc genhtml_branch_coverage=1 00:31:02.129 --rc genhtml_function_coverage=1 00:31:02.129 --rc genhtml_legend=1 00:31:02.129 --rc geninfo_all_blocks=1 00:31:02.129 --rc geninfo_unexecuted_blocks=1 00:31:02.129 00:31:02.129 ' 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:02.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.129 --rc genhtml_branch_coverage=1 00:31:02.129 --rc genhtml_function_coverage=1 00:31:02.129 --rc genhtml_legend=1 00:31:02.129 --rc geninfo_all_blocks=1 00:31:02.129 --rc geninfo_unexecuted_blocks=1 00:31:02.129 00:31:02.129 ' 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.129 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:02.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.130 10:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:10.271 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.272 10:23:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.272 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.272 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.272 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.272 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.272 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:31:10.533 00:31:10.533 --- 10.0.0.2 ping statistics --- 00:31:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.533 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:31:10.533 00:31:10.533 --- 10.0.0.1 ping statistics --- 00:31:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.533 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.533 10:23:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:10.533 10:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:14.917 Waiting for block devices as requested 00:31:14.917 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:14.917 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:15.179 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:15.179 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:15.179 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:15.179 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:15.440 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:15.440 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:15.440 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:15.440 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:16.013 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
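(Editor's note) The xtrace entries that follow build the kernel NVMe-oF target by hand through configfs, now that /dev/nvme0n1 has been selected as the backing block device. A condensed sketch of the equivalent commands is below. The directory layout, NQN, device and address values are taken from the log; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs names and are inferred here, because the xtrace output does not show the redirection targets of the bare 'echo' commands.

  # kernel target module (the log loads nvmet explicitly; nvmet_tcp is pulled in
  # when the TCP port is bound, or can be modprobed up front)
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  # subsystem with a single namespace backed by the local NVMe disk
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string echoed at common.sh@693
  echo 1 > "$subsys/attr_allow_any_host"                         # inferred target of the 'echo 1' at common.sh@695
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"                         # inferred target of the 'echo 1' at common.sh@697

  # TCP listener on the address/port the test uses
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

With the port linked, the 'nvme discover ... -a 10.0.0.1 -t tcp -s 4420' entry further down returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.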
00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:16.014 No valid GPT data, bailing 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:16.014 00:31:16.014 Discovery Log Number of Records 2, Generation counter 2 00:31:16.014 =====Discovery Log Entry 0====== 00:31:16.014 trtype: tcp 00:31:16.014 adrfam: ipv4 00:31:16.014 subtype: current discovery subsystem 00:31:16.014 treq: not specified, sq flow control disable supported 00:31:16.014 portid: 1 00:31:16.014 trsvcid: 4420 00:31:16.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:16.014 traddr: 10.0.0.1 00:31:16.014 eflags: none 00:31:16.014 sectype: none 00:31:16.014 =====Discovery Log Entry 1====== 00:31:16.014 trtype: tcp 00:31:16.014 adrfam: ipv4 00:31:16.014 subtype: nvme subsystem 00:31:16.014 treq: not specified, sq flow control disable 
supported 00:31:16.014 portid: 1 00:31:16.014 trsvcid: 4420 00:31:16.014 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:16.014 traddr: 10.0.0.1 00:31:16.014 eflags: none 00:31:16.014 sectype: none 00:31:16.014 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:16.014 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:16.014 ===================================================== 00:31:16.014 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:16.014 ===================================================== 00:31:16.014 Controller Capabilities/Features 00:31:16.014 ================================ 00:31:16.014 Vendor ID: 0000 00:31:16.014 Subsystem Vendor ID: 0000 00:31:16.014 Serial Number: 8d6ef84c49074ed9dfdf 00:31:16.014 Model Number: Linux 00:31:16.014 Firmware Version: 6.8.9-20 00:31:16.014 Recommended Arb Burst: 0 00:31:16.014 IEEE OUI Identifier: 00 00 00 00:31:16.014 Multi-path I/O 00:31:16.014 May have multiple subsystem ports: No 00:31:16.014 May have multiple controllers: No 00:31:16.014 Associated with SR-IOV VF: No 00:31:16.014 Max Data Transfer Size: Unlimited 00:31:16.014 Max Number of Namespaces: 0 00:31:16.014 Max Number of I/O Queues: 1024 00:31:16.014 NVMe Specification Version (VS): 1.3 00:31:16.014 NVMe Specification Version (Identify): 1.3 00:31:16.014 Maximum Queue Entries: 1024 00:31:16.014 Contiguous Queues Required: No 00:31:16.014 Arbitration Mechanisms Supported 00:31:16.014 Weighted Round Robin: Not Supported 00:31:16.014 Vendor Specific: Not Supported 00:31:16.014 Reset Timeout: 7500 ms 00:31:16.014 Doorbell Stride: 4 bytes 00:31:16.014 NVM Subsystem Reset: Not Supported 00:31:16.014 Command Sets Supported 00:31:16.014 NVM Command Set: Supported 00:31:16.014 Boot Partition: Not Supported 00:31:16.014 Memory Page Size Minimum: 4096 bytes 00:31:16.014 Memory Page Size Maximum: 4096 bytes 00:31:16.014 Persistent Memory Region: Not Supported 00:31:16.014 Optional Asynchronous Events Supported 00:31:16.014 Namespace Attribute Notices: Not Supported 00:31:16.014 Firmware Activation Notices: Not Supported 00:31:16.014 ANA Change Notices: Not Supported 00:31:16.014 PLE Aggregate Log Change Notices: Not Supported 00:31:16.014 LBA Status Info Alert Notices: Not Supported 00:31:16.014 EGE Aggregate Log Change Notices: Not Supported 00:31:16.014 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.014 Zone Descriptor Change Notices: Not Supported 00:31:16.014 Discovery Log Change Notices: Supported 00:31:16.014 Controller Attributes 00:31:16.014 128-bit Host Identifier: Not Supported 00:31:16.014 Non-Operational Permissive Mode: Not Supported 00:31:16.014 NVM Sets: Not Supported 00:31:16.014 Read Recovery Levels: Not Supported 00:31:16.014 Endurance Groups: Not Supported 00:31:16.014 Predictable Latency Mode: Not Supported 00:31:16.014 Traffic Based Keep ALive: Not Supported 00:31:16.014 Namespace Granularity: Not Supported 00:31:16.014 SQ Associations: Not Supported 00:31:16.014 UUID List: Not Supported 00:31:16.014 Multi-Domain Subsystem: Not Supported 00:31:16.014 Fixed Capacity Management: Not Supported 00:31:16.014 Variable Capacity Management: Not Supported 00:31:16.014 Delete Endurance Group: Not Supported 00:31:16.014 Delete NVM Set: Not Supported 00:31:16.014 Extended LBA Formats Supported: Not Supported 00:31:16.014 Flexible Data Placement 
Supported: Not Supported 00:31:16.014 00:31:16.014 Controller Memory Buffer Support 00:31:16.014 ================================ 00:31:16.014 Supported: No 00:31:16.014 00:31:16.014 Persistent Memory Region Support 00:31:16.014 ================================ 00:31:16.014 Supported: No 00:31:16.014 00:31:16.014 Admin Command Set Attributes 00:31:16.014 ============================ 00:31:16.014 Security Send/Receive: Not Supported 00:31:16.014 Format NVM: Not Supported 00:31:16.014 Firmware Activate/Download: Not Supported 00:31:16.014 Namespace Management: Not Supported 00:31:16.014 Device Self-Test: Not Supported 00:31:16.014 Directives: Not Supported 00:31:16.014 NVMe-MI: Not Supported 00:31:16.014 Virtualization Management: Not Supported 00:31:16.014 Doorbell Buffer Config: Not Supported 00:31:16.014 Get LBA Status Capability: Not Supported 00:31:16.014 Command & Feature Lockdown Capability: Not Supported 00:31:16.014 Abort Command Limit: 1 00:31:16.014 Async Event Request Limit: 1 00:31:16.014 Number of Firmware Slots: N/A 00:31:16.014 Firmware Slot 1 Read-Only: N/A 00:31:16.014 Firmware Activation Without Reset: N/A 00:31:16.014 Multiple Update Detection Support: N/A 00:31:16.014 Firmware Update Granularity: No Information Provided 00:31:16.014 Per-Namespace SMART Log: No 00:31:16.014 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.014 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:16.014 Command Effects Log Page: Not Supported 00:31:16.014 Get Log Page Extended Data: Supported 00:31:16.015 Telemetry Log Pages: Not Supported 00:31:16.015 Persistent Event Log Pages: Not Supported 00:31:16.015 Supported Log Pages Log Page: May Support 00:31:16.015 Commands Supported & Effects Log Page: Not Supported 00:31:16.015 Feature Identifiers & Effects Log Page:May Support 00:31:16.015 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.015 Data Area 4 for Telemetry Log: Not Supported 00:31:16.015 Error Log Page Entries Supported: 1 00:31:16.015 Keep Alive: Not Supported 00:31:16.015 00:31:16.015 NVM Command Set Attributes 00:31:16.015 ========================== 00:31:16.015 Submission Queue Entry Size 00:31:16.015 Max: 1 00:31:16.015 Min: 1 00:31:16.015 Completion Queue Entry Size 00:31:16.015 Max: 1 00:31:16.015 Min: 1 00:31:16.015 Number of Namespaces: 0 00:31:16.015 Compare Command: Not Supported 00:31:16.015 Write Uncorrectable Command: Not Supported 00:31:16.015 Dataset Management Command: Not Supported 00:31:16.015 Write Zeroes Command: Not Supported 00:31:16.015 Set Features Save Field: Not Supported 00:31:16.015 Reservations: Not Supported 00:31:16.015 Timestamp: Not Supported 00:31:16.015 Copy: Not Supported 00:31:16.015 Volatile Write Cache: Not Present 00:31:16.015 Atomic Write Unit (Normal): 1 00:31:16.015 Atomic Write Unit (PFail): 1 00:31:16.015 Atomic Compare & Write Unit: 1 00:31:16.015 Fused Compare & Write: Not Supported 00:31:16.015 Scatter-Gather List 00:31:16.015 SGL Command Set: Supported 00:31:16.015 SGL Keyed: Not Supported 00:31:16.015 SGL Bit Bucket Descriptor: Not Supported 00:31:16.015 SGL Metadata Pointer: Not Supported 00:31:16.015 Oversized SGL: Not Supported 00:31:16.015 SGL Metadata Address: Not Supported 00:31:16.015 SGL Offset: Supported 00:31:16.015 Transport SGL Data Block: Not Supported 00:31:16.015 Replay Protected Memory Block: Not Supported 00:31:16.015 00:31:16.015 Firmware Slot Information 00:31:16.015 ========================= 00:31:16.015 Active slot: 0 00:31:16.015 00:31:16.015 00:31:16.015 Error Log 00:31:16.015 
========= 00:31:16.015 00:31:16.015 Active Namespaces 00:31:16.015 ================= 00:31:16.015 Discovery Log Page 00:31:16.015 ================== 00:31:16.015 Generation Counter: 2 00:31:16.015 Number of Records: 2 00:31:16.015 Record Format: 0 00:31:16.015 00:31:16.015 Discovery Log Entry 0 00:31:16.015 ---------------------- 00:31:16.015 Transport Type: 3 (TCP) 00:31:16.015 Address Family: 1 (IPv4) 00:31:16.015 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:16.015 Entry Flags: 00:31:16.015 Duplicate Returned Information: 0 00:31:16.015 Explicit Persistent Connection Support for Discovery: 0 00:31:16.015 Transport Requirements: 00:31:16.015 Secure Channel: Not Specified 00:31:16.015 Port ID: 1 (0x0001) 00:31:16.015 Controller ID: 65535 (0xffff) 00:31:16.015 Admin Max SQ Size: 32 00:31:16.015 Transport Service Identifier: 4420 00:31:16.015 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:16.015 Transport Address: 10.0.0.1 00:31:16.015 Discovery Log Entry 1 00:31:16.015 ---------------------- 00:31:16.015 Transport Type: 3 (TCP) 00:31:16.015 Address Family: 1 (IPv4) 00:31:16.015 Subsystem Type: 2 (NVM Subsystem) 00:31:16.015 Entry Flags: 00:31:16.015 Duplicate Returned Information: 0 00:31:16.015 Explicit Persistent Connection Support for Discovery: 0 00:31:16.015 Transport Requirements: 00:31:16.015 Secure Channel: Not Specified 00:31:16.015 Port ID: 1 (0x0001) 00:31:16.015 Controller ID: 65535 (0xffff) 00:31:16.015 Admin Max SQ Size: 32 00:31:16.015 Transport Service Identifier: 4420 00:31:16.015 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:16.015 Transport Address: 10.0.0.1 00:31:16.015 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:16.277 get_feature(0x01) failed 00:31:16.277 get_feature(0x02) failed 00:31:16.277 get_feature(0x04) failed 00:31:16.277 ===================================================== 00:31:16.277 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:16.277 ===================================================== 00:31:16.277 Controller Capabilities/Features 00:31:16.277 ================================ 00:31:16.277 Vendor ID: 0000 00:31:16.277 Subsystem Vendor ID: 0000 00:31:16.277 Serial Number: 3064aef8c0f373614233 00:31:16.277 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:16.277 Firmware Version: 6.8.9-20 00:31:16.277 Recommended Arb Burst: 6 00:31:16.277 IEEE OUI Identifier: 00 00 00 00:31:16.277 Multi-path I/O 00:31:16.277 May have multiple subsystem ports: Yes 00:31:16.277 May have multiple controllers: Yes 00:31:16.277 Associated with SR-IOV VF: No 00:31:16.277 Max Data Transfer Size: Unlimited 00:31:16.277 Max Number of Namespaces: 1024 00:31:16.277 Max Number of I/O Queues: 128 00:31:16.277 NVMe Specification Version (VS): 1.3 00:31:16.277 NVMe Specification Version (Identify): 1.3 00:31:16.277 Maximum Queue Entries: 1024 00:31:16.277 Contiguous Queues Required: No 00:31:16.277 Arbitration Mechanisms Supported 00:31:16.277 Weighted Round Robin: Not Supported 00:31:16.277 Vendor Specific: Not Supported 00:31:16.277 Reset Timeout: 7500 ms 00:31:16.277 Doorbell Stride: 4 bytes 00:31:16.277 NVM Subsystem Reset: Not Supported 00:31:16.277 Command Sets Supported 00:31:16.277 NVM Command Set: Supported 00:31:16.277 Boot Partition: Not Supported 00:31:16.277 
Memory Page Size Minimum: 4096 bytes 00:31:16.277 Memory Page Size Maximum: 4096 bytes 00:31:16.277 Persistent Memory Region: Not Supported 00:31:16.277 Optional Asynchronous Events Supported 00:31:16.277 Namespace Attribute Notices: Supported 00:31:16.277 Firmware Activation Notices: Not Supported 00:31:16.277 ANA Change Notices: Supported 00:31:16.277 PLE Aggregate Log Change Notices: Not Supported 00:31:16.277 LBA Status Info Alert Notices: Not Supported 00:31:16.277 EGE Aggregate Log Change Notices: Not Supported 00:31:16.277 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.277 Zone Descriptor Change Notices: Not Supported 00:31:16.277 Discovery Log Change Notices: Not Supported 00:31:16.277 Controller Attributes 00:31:16.277 128-bit Host Identifier: Supported 00:31:16.277 Non-Operational Permissive Mode: Not Supported 00:31:16.277 NVM Sets: Not Supported 00:31:16.277 Read Recovery Levels: Not Supported 00:31:16.277 Endurance Groups: Not Supported 00:31:16.277 Predictable Latency Mode: Not Supported 00:31:16.277 Traffic Based Keep ALive: Supported 00:31:16.277 Namespace Granularity: Not Supported 00:31:16.277 SQ Associations: Not Supported 00:31:16.277 UUID List: Not Supported 00:31:16.277 Multi-Domain Subsystem: Not Supported 00:31:16.277 Fixed Capacity Management: Not Supported 00:31:16.277 Variable Capacity Management: Not Supported 00:31:16.277 Delete Endurance Group: Not Supported 00:31:16.277 Delete NVM Set: Not Supported 00:31:16.277 Extended LBA Formats Supported: Not Supported 00:31:16.277 Flexible Data Placement Supported: Not Supported 00:31:16.277 00:31:16.277 Controller Memory Buffer Support 00:31:16.277 ================================ 00:31:16.277 Supported: No 00:31:16.277 00:31:16.277 Persistent Memory Region Support 00:31:16.277 ================================ 00:31:16.277 Supported: No 00:31:16.277 00:31:16.277 Admin Command Set Attributes 00:31:16.277 ============================ 00:31:16.277 Security Send/Receive: Not Supported 00:31:16.277 Format NVM: Not Supported 00:31:16.277 Firmware Activate/Download: Not Supported 00:31:16.277 Namespace Management: Not Supported 00:31:16.277 Device Self-Test: Not Supported 00:31:16.277 Directives: Not Supported 00:31:16.277 NVMe-MI: Not Supported 00:31:16.277 Virtualization Management: Not Supported 00:31:16.277 Doorbell Buffer Config: Not Supported 00:31:16.277 Get LBA Status Capability: Not Supported 00:31:16.277 Command & Feature Lockdown Capability: Not Supported 00:31:16.277 Abort Command Limit: 4 00:31:16.277 Async Event Request Limit: 4 00:31:16.277 Number of Firmware Slots: N/A 00:31:16.277 Firmware Slot 1 Read-Only: N/A 00:31:16.277 Firmware Activation Without Reset: N/A 00:31:16.277 Multiple Update Detection Support: N/A 00:31:16.277 Firmware Update Granularity: No Information Provided 00:31:16.277 Per-Namespace SMART Log: Yes 00:31:16.277 Asymmetric Namespace Access Log Page: Supported 00:31:16.277 ANA Transition Time : 10 sec 00:31:16.277 00:31:16.277 Asymmetric Namespace Access Capabilities 00:31:16.277 ANA Optimized State : Supported 00:31:16.277 ANA Non-Optimized State : Supported 00:31:16.277 ANA Inaccessible State : Supported 00:31:16.277 ANA Persistent Loss State : Supported 00:31:16.277 ANA Change State : Supported 00:31:16.277 ANAGRPID is not changed : No 00:31:16.277 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:16.277 00:31:16.277 ANA Group Identifier Maximum : 128 00:31:16.277 Number of ANA Group Identifiers : 128 00:31:16.277 Max Number of Allowed Namespaces : 1024 00:31:16.277 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:16.277 Command Effects Log Page: Supported 00:31:16.277 Get Log Page Extended Data: Supported 00:31:16.277 Telemetry Log Pages: Not Supported 00:31:16.277 Persistent Event Log Pages: Not Supported 00:31:16.277 Supported Log Pages Log Page: May Support 00:31:16.277 Commands Supported & Effects Log Page: Not Supported 00:31:16.277 Feature Identifiers & Effects Log Page:May Support 00:31:16.277 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.277 Data Area 4 for Telemetry Log: Not Supported 00:31:16.277 Error Log Page Entries Supported: 128 00:31:16.277 Keep Alive: Supported 00:31:16.277 Keep Alive Granularity: 1000 ms 00:31:16.277 00:31:16.277 NVM Command Set Attributes 00:31:16.277 ========================== 00:31:16.277 Submission Queue Entry Size 00:31:16.277 Max: 64 00:31:16.277 Min: 64 00:31:16.277 Completion Queue Entry Size 00:31:16.277 Max: 16 00:31:16.277 Min: 16 00:31:16.277 Number of Namespaces: 1024 00:31:16.277 Compare Command: Not Supported 00:31:16.277 Write Uncorrectable Command: Not Supported 00:31:16.277 Dataset Management Command: Supported 00:31:16.277 Write Zeroes Command: Supported 00:31:16.277 Set Features Save Field: Not Supported 00:31:16.277 Reservations: Not Supported 00:31:16.277 Timestamp: Not Supported 00:31:16.277 Copy: Not Supported 00:31:16.277 Volatile Write Cache: Present 00:31:16.277 Atomic Write Unit (Normal): 1 00:31:16.277 Atomic Write Unit (PFail): 1 00:31:16.277 Atomic Compare & Write Unit: 1 00:31:16.277 Fused Compare & Write: Not Supported 00:31:16.277 Scatter-Gather List 00:31:16.277 SGL Command Set: Supported 00:31:16.277 SGL Keyed: Not Supported 00:31:16.277 SGL Bit Bucket Descriptor: Not Supported 00:31:16.277 SGL Metadata Pointer: Not Supported 00:31:16.277 Oversized SGL: Not Supported 00:31:16.277 SGL Metadata Address: Not Supported 00:31:16.277 SGL Offset: Supported 00:31:16.277 Transport SGL Data Block: Not Supported 00:31:16.277 Replay Protected Memory Block: Not Supported 00:31:16.277 00:31:16.277 Firmware Slot Information 00:31:16.277 ========================= 00:31:16.277 Active slot: 0 00:31:16.277 00:31:16.277 Asymmetric Namespace Access 00:31:16.277 =========================== 00:31:16.277 Change Count : 0 00:31:16.277 Number of ANA Group Descriptors : 1 00:31:16.277 ANA Group Descriptor : 0 00:31:16.277 ANA Group ID : 1 00:31:16.277 Number of NSID Values : 1 00:31:16.277 Change Count : 0 00:31:16.277 ANA State : 1 00:31:16.277 Namespace Identifier : 1 00:31:16.277 00:31:16.277 Commands Supported and Effects 00:31:16.277 ============================== 00:31:16.277 Admin Commands 00:31:16.277 -------------- 00:31:16.277 Get Log Page (02h): Supported 00:31:16.277 Identify (06h): Supported 00:31:16.277 Abort (08h): Supported 00:31:16.277 Set Features (09h): Supported 00:31:16.277 Get Features (0Ah): Supported 00:31:16.277 Asynchronous Event Request (0Ch): Supported 00:31:16.277 Keep Alive (18h): Supported 00:31:16.277 I/O Commands 00:31:16.277 ------------ 00:31:16.277 Flush (00h): Supported 00:31:16.277 Write (01h): Supported LBA-Change 00:31:16.277 Read (02h): Supported 00:31:16.277 Write Zeroes (08h): Supported LBA-Change 00:31:16.277 Dataset Management (09h): Supported 00:31:16.277 00:31:16.277 Error Log 00:31:16.277 ========= 00:31:16.277 Entry: 0 00:31:16.277 Error Count: 0x3 00:31:16.277 Submission Queue Id: 0x0 00:31:16.277 Command Id: 0x5 00:31:16.278 Phase Bit: 0 00:31:16.278 Status Code: 0x2 00:31:16.278 Status Code Type: 0x0 00:31:16.278 Do Not Retry: 1 00:31:16.278 
Error Location: 0x28 00:31:16.278 LBA: 0x0 00:31:16.278 Namespace: 0x0 00:31:16.278 Vendor Log Page: 0x0 00:31:16.278 ----------- 00:31:16.278 Entry: 1 00:31:16.278 Error Count: 0x2 00:31:16.278 Submission Queue Id: 0x0 00:31:16.278 Command Id: 0x5 00:31:16.278 Phase Bit: 0 00:31:16.278 Status Code: 0x2 00:31:16.278 Status Code Type: 0x0 00:31:16.278 Do Not Retry: 1 00:31:16.278 Error Location: 0x28 00:31:16.278 LBA: 0x0 00:31:16.278 Namespace: 0x0 00:31:16.278 Vendor Log Page: 0x0 00:31:16.278 ----------- 00:31:16.278 Entry: 2 00:31:16.278 Error Count: 0x1 00:31:16.278 Submission Queue Id: 0x0 00:31:16.278 Command Id: 0x4 00:31:16.278 Phase Bit: 0 00:31:16.278 Status Code: 0x2 00:31:16.278 Status Code Type: 0x0 00:31:16.278 Do Not Retry: 1 00:31:16.278 Error Location: 0x28 00:31:16.278 LBA: 0x0 00:31:16.278 Namespace: 0x0 00:31:16.278 Vendor Log Page: 0x0 00:31:16.278 00:31:16.278 Number of Queues 00:31:16.278 ================ 00:31:16.278 Number of I/O Submission Queues: 128 00:31:16.278 Number of I/O Completion Queues: 128 00:31:16.278 00:31:16.278 ZNS Specific Controller Data 00:31:16.278 ============================ 00:31:16.278 Zone Append Size Limit: 0 00:31:16.278 00:31:16.278 00:31:16.278 Active Namespaces 00:31:16.278 ================= 00:31:16.278 get_feature(0x05) failed 00:31:16.278 Namespace ID:1 00:31:16.278 Command Set Identifier: NVM (00h) 00:31:16.278 Deallocate: Supported 00:31:16.278 Deallocated/Unwritten Error: Not Supported 00:31:16.278 Deallocated Read Value: Unknown 00:31:16.278 Deallocate in Write Zeroes: Not Supported 00:31:16.278 Deallocated Guard Field: 0xFFFF 00:31:16.278 Flush: Supported 00:31:16.278 Reservation: Not Supported 00:31:16.278 Namespace Sharing Capabilities: Multiple Controllers 00:31:16.278 Size (in LBAs): 3750748848 (1788GiB) 00:31:16.278 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:16.278 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:16.278 UUID: dd3b2c7e-fb66-490b-9e1a-228d0117eb22 00:31:16.278 Thin Provisioning: Not Supported 00:31:16.278 Per-NS Atomic Units: Yes 00:31:16.278 Atomic Write Unit (Normal): 8 00:31:16.278 Atomic Write Unit (PFail): 8 00:31:16.278 Preferred Write Granularity: 8 00:31:16.278 Atomic Compare & Write Unit: 8 00:31:16.278 Atomic Boundary Size (Normal): 0 00:31:16.278 Atomic Boundary Size (PFail): 0 00:31:16.278 Atomic Boundary Offset: 0 00:31:16.278 NGUID/EUI64 Never Reused: No 00:31:16.278 ANA group ID: 1 00:31:16.278 Namespace Write Protected: No 00:31:16.278 Number of LBA Formats: 1 00:31:16.278 Current LBA Format: LBA Format #00 00:31:16.278 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.278 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.278 rmmod nvme_tcp 00:31:16.278 rmmod nvme_fabrics 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.278 10:23:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:18.826 10:23:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:22.134 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:22.134 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:22.395 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:22.655 00:31:22.655 real 0m20.815s 00:31:22.655 user 0m5.707s 00:31:22.655 sys 0m12.142s 00:31:22.655 10:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:22.655 10:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.655 ************************************ 00:31:22.655 END TEST nvmf_identify_kernel_target 00:31:22.655 ************************************ 00:31:22.655 10:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:22.655 10:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:22.655 10:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:22.655 10:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.655 ************************************ 00:31:22.655 START TEST nvmf_auth_host 00:31:22.655 ************************************ 00:31:22.655 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:22.916 * Looking for test storage... 
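(Editor's note) The next entries run the lt/cmp_versions helpers from scripts/common.sh to decide whether the installed lcov (1.15 in this run) predates 2.x and therefore needs the extra branch/function coverage flags. A minimal stand-in for that comparison is sketched below; it assumes purely numeric fields, and the function name and the trailing usage line are illustrative only (the real helper also routes each field through its decimal check and exports LCOV_OPTS/LCOV as shown in the log).

  # version_lt A B: succeed (return 0) when A sorts strictly before B.
  # Fields are split on '.', '-' and ':' exactly as the xtrace shows; missing fields count as 0.
  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          local a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  # e.g. 'version_lt 1.15 2' succeeds, so the old-lcov coverage options get added
  version_lt 1.15 2 && extra_lcov_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'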
00:31:22.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.916 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.917 --rc genhtml_branch_coverage=1 00:31:22.917 --rc genhtml_function_coverage=1 00:31:22.917 --rc genhtml_legend=1 00:31:22.917 --rc geninfo_all_blocks=1 00:31:22.917 --rc geninfo_unexecuted_blocks=1 00:31:22.917 00:31:22.917 ' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.917 --rc genhtml_branch_coverage=1 00:31:22.917 --rc genhtml_function_coverage=1 00:31:22.917 --rc genhtml_legend=1 00:31:22.917 --rc geninfo_all_blocks=1 00:31:22.917 --rc geninfo_unexecuted_blocks=1 00:31:22.917 00:31:22.917 ' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.917 --rc genhtml_branch_coverage=1 00:31:22.917 --rc genhtml_function_coverage=1 00:31:22.917 --rc genhtml_legend=1 00:31:22.917 --rc geninfo_all_blocks=1 00:31:22.917 --rc geninfo_unexecuted_blocks=1 00:31:22.917 00:31:22.917 ' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.917 --rc genhtml_branch_coverage=1 00:31:22.917 --rc genhtml_function_coverage=1 00:31:22.917 --rc genhtml_legend=1 00:31:22.917 --rc geninfo_all_blocks=1 00:31:22.917 --rc geninfo_unexecuted_blocks=1 00:31:22.917 00:31:22.917 ' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.917 10:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.917 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.918 10:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.065 10:23:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:31.065 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:31.065 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.065 
10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:31.065 Found net devices under 0000:31:00.0: cvl_0_0 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:31.065 Found net devices under 0000:31:00.1: cvl_0_1 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.065 10:23:34 
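The trace above resolves each supported NIC PCI function (the two 0x8086:0x159b E810 ports) to a kernel net device by globbing the net/ directory under its sysfs PCI node, which is how cvl_0_0 and cvl_0_1 are discovered. A minimal stand-alone sketch of that lookup, with the PCI address copied from the trace and everything else assumed:

# Map a PCI function to the net device(s) its driver registered.
pci=0000:31:00.0                                   # address taken from the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0

if [[ ! -e ${pci_net_devs[0]} ]]; then
    echo "no net device bound to $pci" >&2
    exit 1
fi

pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"
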
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.065 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:31:31.066 00:31:31.066 --- 10.0.0.2 ping statistics --- 00:31:31.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.066 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:31:31.066 00:31:31.066 --- 10.0.0.1 ping statistics --- 00:31:31.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.066 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4059670 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4059670 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4059670 ']' 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
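The nvmf_tcp_init commands traced in the two lines above build the single-host test topology: one E810 port is moved into a fresh network namespace (the side the harness labels the target interface), the peer port stays in the root namespace, each side gets an address on 10.0.0.0/24, an iptables rule admits TCP port 4420, and a ping in each direction verifies the path before the SPDK app is started. Condensed into a sketch that reuses the names and addresses from the trace, assuming both interfaces exist and carry no other configuration:

# Namespaced back-to-back topology, as set up by nvmf_tcp_init above.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"            # one port on each side of the link

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP (port 4420) through on the root-namespace side.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                              # root namespace -> namespace
ip netns exec "$ns" ping -c 1 10.0.0.1          # namespace -> root namespace
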
00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:31.066 10:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=762cccc45d9d680ae155c520c2d7fd94 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yr4 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 762cccc45d9d680ae155c520c2d7fd94 0 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 762cccc45d9d680ae155c520c2d7fd94 0 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=762cccc45d9d680ae155c520c2d7fd94 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yr4 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yr4 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yr4 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.010 10:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:32.010 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5f683a5aa0d22d938862b9c9f1399ed99d15b99fa0d3acdb041496ecad0d9c17 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xCw 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5f683a5aa0d22d938862b9c9f1399ed99d15b99fa0d3acdb041496ecad0d9c17 3 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5f683a5aa0d22d938862b9c9f1399ed99d15b99fa0d3acdb041496ecad0d9c17 3 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5f683a5aa0d22d938862b9c9f1399ed99d15b99fa0d3acdb041496ecad0d9c17 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xCw 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xCw 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xCw 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=850d73ff7a1426702a6c2b64ff2c320112842d194d7c3d8e 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yIR 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 850d73ff7a1426702a6c2b64ff2c320112842d194d7c3d8e 0 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 850d73ff7a1426702a6c2b64ff2c320112842d194d7c3d8e 0 
00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=850d73ff7a1426702a6c2b64ff2c320112842d194d7c3d8e 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yIR 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yIR 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yIR 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:32.272 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f396562b2cc1d819383ecd1268ed1a4500eb9d113ac7073f 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YZr 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f396562b2cc1d819383ecd1268ed1a4500eb9d113ac7073f 2 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f396562b2cc1d819383ecd1268ed1a4500eb9d113ac7073f 2 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f396562b2cc1d819383ecd1268ed1a4500eb9d113ac7073f 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YZr 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YZr 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YZr 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.273 10:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fe38189125db5937a50caf9f4f265b73 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.afh 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fe38189125db5937a50caf9f4f265b73 1 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fe38189125db5937a50caf9f4f265b73 1 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fe38189125db5937a50caf9f4f265b73 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.afh 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.afh 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.afh 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3edf2055f750603b9cfd8f6d5c69f49c 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bkR 00:31:32.273 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3edf2055f750603b9cfd8f6d5c69f49c 1 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3edf2055f750603b9cfd8f6d5c69f49c 1 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=3edf2055f750603b9cfd8f6d5c69f49c 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bkR 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bkR 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bkR 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=97a2df0877bf347214226d214300682140cb93cf6e89fd93 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lzZ 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 97a2df0877bf347214226d214300682140cb93cf6e89fd93 2 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 97a2df0877bf347214226d214300682140cb93cf6e89fd93 2 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=97a2df0877bf347214226d214300682140cb93cf6e89fd93 00:31:32.534 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lzZ 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lzZ 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lzZ 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:32.535 10:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2339bf7256e64a0b58e48f32f5981468 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2vg 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2339bf7256e64a0b58e48f32f5981468 0 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2339bf7256e64a0b58e48f32f5981468 0 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2339bf7256e64a0b58e48f32f5981468 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2vg 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2vg 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2vg 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=664da702a0a845d8df7af1458c14753b8c8032cf0213ce44fb2a2101b56d625e 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1Hy 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 664da702a0a845d8df7af1458c14753b8c8032cf0213ce44fb2a2101b56d625e 3 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 664da702a0a845d8df7af1458c14753b8c8032cf0213ce44fb2a2101b56d625e 3 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=664da702a0a845d8df7af1458c14753b8c8032cf0213ce44fb2a2101b56d625e 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:32.535 10:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1Hy 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1Hy 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1Hy 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4059670 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4059670 ']' 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:32.535 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yr4 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xCw ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xCw 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yIR 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YZr ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.YZr 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.afh 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bkR ]] 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bkR 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.796 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lzZ 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2vg ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2vg 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1Hy 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.797 10:23:36 
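The key material above comes from gen_dhchap_key: xxd pulls the requested number of random hex characters from /dev/urandom, an inline Python snippet wraps them into the DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64 payload>:, and the resulting files are then registered through keyring_file_add_key. Judging from the values printed in the trace, the base64 payload appears to be the ASCII hex string followed by its CRC32, so the sketch below is a best guess at what format_dhchap_key does rather than a copy of the helper in test/nvmf/common.sh:

# Rough reconstruction of gen_dhchap_key (digest ids: 0 null, 1 sha256,
# 2 sha384, 3 sha512; len is the hex-character length of the secret).
digest=1 len=32

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sha256.XXX)

# Assumed layout: DHHC-1:<digest>:<base64(ascii-hex secret + CRC32)>:
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02x}:{payload}:")
PY
chmod 0600 "$file"
echo "$file"
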
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:32.797 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:33.058 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:33.058 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:33.058 10:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:36.402 Waiting for block devices as requested 00:31:36.402 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:36.402 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:36.662 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:36.662 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:36.662 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:36.923 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:36.923 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:36.923 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:37.184 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:37.184 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:37.445 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:37.445 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:37.445 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:37.445 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:37.705 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:37.705 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:37.705 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:38.645 10:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:38.645 No valid GPT data, bailing 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:38.645 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:38.646 10:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:38.646 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:38.906 00:31:38.906 Discovery Log Number of Records 2, Generation counter 2 00:31:38.906 =====Discovery Log Entry 0====== 00:31:38.906 trtype: tcp 00:31:38.906 adrfam: ipv4 00:31:38.906 subtype: current discovery subsystem 00:31:38.906 treq: not specified, sq flow control disable supported 00:31:38.906 portid: 1 00:31:38.906 trsvcid: 4420 00:31:38.906 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:38.906 traddr: 10.0.0.1 00:31:38.906 eflags: none 00:31:38.906 sectype: none 00:31:38.906 =====Discovery Log Entry 1====== 00:31:38.906 trtype: tcp 00:31:38.906 adrfam: ipv4 00:31:38.906 subtype: nvme subsystem 00:31:38.906 treq: not specified, sq flow control disable supported 00:31:38.906 portid: 1 00:31:38.906 trsvcid: 4420 00:31:38.906 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:38.906 traddr: 10.0.0.1 00:31:38.906 eflags: none 00:31:38.906 sectype: none 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
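configure_kernel_target and the nvmet_auth_set_key call that starts above drive the Linux target entirely through configfs: a subsystem with a namespace backed by the local drive (/dev/nvme0n1), a TCP port on 10.0.0.1:4420, an allowed-hosts entry for the test host NQN, and per-host DHCHAP settings. The condensed sketch below mirrors those writes; the dhchap_* attribute names at the end are an assumption about the kernel's nvmet authentication support, since the trace only shows the echo side of each write, and the key values are placeholders for the secrets generated earlier:

# Condensed configfs sequence for the kernel target configured above.
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
host_key='DHHC-1:00:replace-with-generated-secret:'    # placeholders; real values
ctrl_key='DHHC-1:02:replace-with-generated-secret:'    # come from gen_dhchap_key

modprobe nvmet

mkdir -p "$nvmet/subsystems/$subnqn/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "$nvmet/subsystems/$subnqn/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"

# Restrict the subsystem to the test host and hand it the DHCHAP material
# (attribute names below are assumed, not taken from the trace).
mkdir "$nvmet/hosts/$hostnqn"
echo 0 > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
ln -s "$nvmet/hosts/$hostnqn" "$nvmet/subsystems/$subnqn/allowed_hosts/"
echo 'hmac(sha256)' > "$nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048      > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo "$host_key"    > "$nvmet/hosts/$hostnqn/dhchap_key"
echo "$ctrl_key"    > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"
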
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.906 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.907 nvme0n1 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.907 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:39.167 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.168 nvme0n1 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.168 10:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:39.168 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 nvme0n1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 10:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.689 nvme0n1 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.689 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.690 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.950 nvme0n1 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.950 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.211 nvme0n1 00:31:40.211 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.211 10:23:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.211 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.211 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.212 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.473 nvme0n1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.473 
10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.473 10:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.734 nvme0n1 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.734 10:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.734 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.995 nvme0n1 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.995 10:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.995 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.256 nvme0n1 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.256 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.257 10:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.257 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.518 nvme0n1 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:31:41.518 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.519 10:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.779 nvme0n1 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.779 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.040 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:42.041 10:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.041 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 nvme0n1 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:42.302 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
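
Each nvmet_auth_set_key pass above stages one digest/dhgroup/keyid combination on the target before the host tries to connect with it. As a rough sketch of where its four echoes most plausibly land, assuming the target is the Linux kernel nvmet configured through configfs (the configfs path, host NQN, and attribute names below are assumptions, not visible in this trace; the secrets are the keyid-0 values from the first pass in this excerpt):

# Hedged sketch: provision DH-HMAC-CHAP material for one host entry on a
# Linux nvmet target. Path and attribute names are assumed, not from the log.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "${host_dir}/dhchap_hash"      # digest under test
echo 'ffdhe4096'    > "${host_dir}/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52:' > "${host_dir}/dhchap_key"        # host secret (key 0)
echo 'DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=:' > "${host_dir}/dhchap_ctrl_key"  # controller secret (ckey 0)

In the DHHC-1:NN:<base64>: secrets the NN field appears to record how the secret was transformed (00 untransformed, 01/02/03 for SHA-256/384/512), which is why the host and controller secrets in this run carry different indices.
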
00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.303 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 nvme0n1 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 10:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.563 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.134 nvme0n1 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.134 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.135 10:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.135 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.396 nvme0n1 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.396 10:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.967 nvme0n1 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.967 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 
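
Every connect_authenticate seen in this trace reduces to the same four-RPC round trip against the SPDK host application, with only the digest, DH group and key index changing. A condensed sketch using the arguments printed above (driving rpc_cmd through rpc.py is an assumption, and key1/ckey1 must already have been registered in the host's keyring earlier in the script):

# 1. Limit the initiator to the digest and DH group under test.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
# 2. Attach with DH-HMAC-CHAP, naming the pre-registered host and controller keys.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 3. Authentication passed if the controller materialized under the expected name.
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# 4. Detach so the next digest/dhgroup/keyid combination starts clean.
rpc.py bdev_nvme_detach_controller nvme0
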
00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.968 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 nvme0n1 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.539 10:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.539 10:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.112 nvme0n1 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.112 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.372 nvme0n1 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.372 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.633 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.634 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.634 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.634 10:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.204 nvme0n1 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.204 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.205 10:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.774 nvme0n1 00:31:46.774 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.774 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.774 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.774 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.775 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.775 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.035 10:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.606 nvme0n1 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.606 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:47.872 
10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.872 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.444 nvme0n1 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.444 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.704 
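A detail worth noting in the trace: host/auth.sh@58 builds the optional controller-key argument with ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so when a key index has no controller secret (ckey is echoed as empty for key index 4, and that attach call then carries only --dhchap-key), the array expands to nothing and no flag is passed at all. A small self-contained bash illustration of that ${var:+word} idiom follows; the array contents are placeholders, not the test's real secrets.
  # Editorial illustration of the ${var:+word} expansion used at host/auth.sh@58;
  # placeholder values only, not part of the captured console output.
  ckeys=("DHHC-1:00:placeholder-secret:" "")   # index 1 deliberately has no ctrlr key
  for keyid in 0 1; do
      # Expands to two extra argv words when ckeys[keyid] is non-empty, to nothing otherwise.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # Output:
  #   keyid=0 extra args: --dhchap-ctrlr-key ckey0
  #   keyid=1 extra args: <none>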
10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.704 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.705 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.705 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.705 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.705 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.705 10:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.275 nvme0n1 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.275 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:49.535 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.536 10:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.105 nvme0n1 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.105 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.365 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.366 nvme0n1 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.366 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.627 nvme0n1 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:50.627 10:23:54 
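Each iteration above also walks through nvmf/common.sh's get_main_ns_ip helper (the @769-@783 lines): an associative array maps the transport to the name of the environment variable that holds the address, and the value is then reached by indirect expansion, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. The sketch below reconstructs that flow from the trace; the TEST_TRANSPORT variable name and the exact error handling are assumptions, so treat it as an approximation of the real helper rather than a copy of it.
  # Editorial reconstruction of the candidate-selection flow traced above; not the
  # verbatim nvmf/common.sh code. TEST_TRANSPORT is an assumed variable name.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1     # trace: [[ -z tcp ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}     # trace: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1              # indirect expansion; trace: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                            # trace: echo 10.0.0.1
  }
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1
  get_main_ns_ip                               # prints 10.0.0.1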
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.627 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.888 nvme0n1 00:31:50.888 10:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.888 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.889 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.149 nvme0n1 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.149 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.150 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.409 nvme0n1 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.409 10:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.669 nvme0n1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.669 
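The @100-@103 lines in this part of the trace show the sweep structure driving all of the above: an outer loop over digests, a middle loop over DH groups, and an inner loop over the key indices, with nvmet_auth_set_key configuring the kernel target side and connect_authenticate exercising the SPDK host side for each combination (here the run has just moved from sha256/ffdhe8192 to sha384 with ffdhe2048 and then ffdhe3072). A stubbed editorial sketch of that structure, listing only the values visible in this portion of the log:
  # Editorial stub of the test's sweep loops (host/auth.sh@100-103); the echo stands in
  # for the real nvmet_auth_set_key / connect_authenticate calls, and the arrays list
  # only values visible in this part of the console output.
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
  keys=(k0 k1 k2 k3 k4)   # placeholders; the real DHHC-1 secrets appear in the trace
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              echo "would test: digest=$digest dhgroup=$dhgroup keyid=$keyid"
          done
      done
  done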
10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.669 10:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.669 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.930 nvme0n1 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.930 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.191 nvme0n1 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.191 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.192 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.192 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.192 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.192 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.453 nvme0n1 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.453 
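Each repetition of the frames above is one pass of the auth sweep for the current digest/DH-group pair: host/auth.sh first programs the target-side expectation (nvmet_auth_set_key), then connect_authenticate restricts the host to the same parameters, attaches, verifies the controller name, and detaches. A condensed sketch of that cycle, using only the RPCs visible in the trace; the scripts/rpc.py path and the pre-registered key names key0..key4 / ckey0..ckey4 are assumptions, since their setup is outside this excerpt:

  # One pass of the cycle visible above, for sha384 / ffdhe3072 and key id 3.
  digest=sha384 dhgroup=ffdhe3072 keyid=3

  # Host side: negotiate only the digest and DH group under test.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach to the kernel target at 10.0.0.1:4420 with the matching secrets
  # (key3/ckey3 are names of keys registered earlier in the test, not shown here).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Verify the authenticated controller came up, then drop it before the next key id.
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0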
10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.453 10:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.714 nvme0n1 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.714 
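The recurring xtrace_disable / set +x / [[ 0 == 0 ]] frames (autotest_common.sh@561, @10 and @589) bracket every rpc_cmd call: tracing is silenced while the RPC runs and the captured exit status is then compared against zero, so each of these checks records a successful RPC. Roughly, and only as a reading of the trace rather than SPDK's actual helper:

  # Hypothetical wrapper shape implied by the @561/@10/@589 frames; not SPDK's real rpc_cmd.
  rpc_cmd() {
      xtrace_disable                              # @561, which does the "set +x" seen at @10
      local rc=0
      "$rootdir/scripts/rpc.py" "$@" || rc=$?     # assumption: direct rpc.py invocation
      xtrace_restore
      [[ $rc == 0 ]]                              # the recurring "[[ 0 == 0 ]]" assertion at @589
  }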
10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.714 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.974 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.235 nvme0n1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.235 10:23:56 
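On the target side, nvmet_auth_set_key (the echo frames at host/auth.sh@48-51 above) pushes the digest, DH group, host secret and, when present, the controller secret to the kernel nvmet host entry, so the target expects exactly what the initiator will offer. Only the echoed values appear in this log; the configfs destination below is an assumption about where host/auth.sh redirects them:

  # Sketch of the target-side key programming; attribute paths are assumed, and only the
  # echoed values ('hmac(sha384)', ffdhe4096, the DHHC-1:... secrets) come from the trace.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed location
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"        # digest under test
  echo ffdhe4096      > "$host_dir/dhchap_dhgroup"     # DH group under test
  echo "$key"         > "$host_dir/dhchap_key"         # host secret (DHHC-1:... string)
  [[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only for bidirectional auth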
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.235 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.496 nvme0n1 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.496 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.497 10:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.757 nvme0n1 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.757 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:54.017 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.018 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.278 nvme0n1 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.278 10:23:57 
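The get_main_ns_ip frames just above look odd in the trace because the candidate map stores variable names, not addresses: for the tcp transport the helper selects the string NVMF_INITIATOR_IP and only then dereferences it, which is why the literal name shows up in the [[ -z ... ]] checks before 10.0.0.1 is finally echoed. A paraphrase of that logic (the transport variable name is an assumption):

  # Paraphrase of get_main_ns_ip as it appears in the trace; not the verbatim helper.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # names of variables, not addresses
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1            # "tcp" for this run (variable name assumed)
      ip=${ip_candidates[$TEST_TRANSPORT]}            # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                     # indirect expansion -> 10.0.0.1
      echo "${!ip}"
  }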
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.278 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.539 nvme0n1 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
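Key id 4 has no companion controller secret (ckey is empty in the frames above), so the ":+" expansion at host/auth.sh@58 contributes no words and the attach for key4 runs with unidirectional authentication only, i.e. without --dhchap-ctrlr-key. The pattern in isolation:

  # How the optional controller-key argument is built; values are illustrative only.
  ckeys=( [3]='DHHC-1:00:placeholder-secret:' [4]='' )   # key id 4 has no controller secret
  for keyid in 3 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "key id $keyid extra attach args: ${ckey[*]:-<none>}"
  done
  # key id 3 extra attach args: --dhchap-ctrlr-key ckey3
  # key id 4 extra attach args: <none>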
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.539 10:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.539 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.110 nvme0n1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.110 10:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.682 nvme0n1 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.682 10:23:59 
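The DHHC-1 strings being echoed are NVMe in-band authentication secrets in the representation nvme-cli's gen-dhchap-key produces: the second field records how the secret was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512, which is why the 01, 02 and 03 keys above get progressively longer), and the base64 payload carries the raw key bytes followed by a 4-byte CRC-32. A quick length check against one of the keys from this log:

  # Secret taken verbatim from the trace (key id 2); the :01: variant should decode
  # to 36 bytes: a 32-byte SHA-256-sized key plus the 4-byte CRC-32 trailer.
  key='DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R:'
  payload=${key#DHHC-1:*:}      # strip the "DHHC-1:01:" prefix
  payload=${payload%:}          # strip the trailing ':'
  printf '%s' "$payload" | base64 -d | wc -c    # expected output: 36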
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.682 10:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.682 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.252 nvme0n1 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.252 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.253 10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.253 
10:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.823 nvme0n1 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.823 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.392 nvme0n1 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.392 10:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.392 10:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.332 nvme0n1 00:31:58.332 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.332 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.332 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.332 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.332 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.333 10:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.902 nvme0n1 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.902 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.163 
10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.163 10:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.732 nvme0n1 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.732 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.992 10:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.562 nvme0n1 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.562 10:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.562 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.822 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.822 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.822 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:00.822 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.822 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.823 10:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.823 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.445 nvme0n1 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.446 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.792 10:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.792 nvme0n1 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.792 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.793 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.082 nvme0n1 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.082 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:02.083 
10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.083 nvme0n1 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.083 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.344 
10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.344 nvme0n1 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.344 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.605 10:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.605 nvme0n1 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:02.605 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.606 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.866 nvme0n1 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.866 
10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.866 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.867 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.128 10:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.128 nvme0n1 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.128 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:03.388 10:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.388 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.389 nvme0n1 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.389 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.649 10:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.649 10:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.649 nvme0n1 00:32:03.649 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.649 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.650 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.650 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.650 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.650 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:03.910 
10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
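[editor note] Each "nvme0n1" block above is one iteration of the authentication matrix: the trace programs the target side with a DH-HMAC-CHAP key, restricts the host to a single digest/dhgroup pair, attaches over TCP, verifies the controller came up, and detaches before the next keyid. A condensed per-iteration sketch follows; it is reconstructed from the traced commands only (the nvmet_auth_set_key internals and loop variables are paraphrased from the trace, not copied verbatim from host/auth.sh), so treat it as an approximation:

  # one (digest, dhgroup, keyid) iteration, as seen in the trace above
  nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # target side: install keys[keyid] (and ckey, if any)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # connection must authenticate
  rpc_cmd bdev_nvme_detach_controller nvme0                                # clean up for the next keyid

The remaining output below repeats this cycle for the other ffdhe groups and key IDs.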
00:32:03.910 nvme0n1 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.171 10:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.171 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.432 nvme0n1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.432 10:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.432 10:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.432 10:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.693 nvme0n1 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.693 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.263 nvme0n1 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:05.263 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.264 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.524 nvme0n1 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.524 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.525 10:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.785 nvme0n1 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.785 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.786 10:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.786 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.357 nvme0n1 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.357 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:06.358 10:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.358 10:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.929 nvme0n1 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:06.929 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.930 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.500 nvme0n1 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:07.500 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.501 10:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.071 nvme0n1 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:08.071 10:24:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:08.071 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:08.072 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.072 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.072 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.642 nvme0n1 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.642 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYyY2NjYzQ1ZDlkNjgwYWUxNTVjNTIwYzJkN2ZkOTTSnX52: 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWY2ODNhNWFhMGQyMmQ5Mzg4NjJiOWM5ZjEzOTllZDk5ZDE1Yjk5ZmEwZDNhY2RiMDQxNDk2ZWNhZDBkOWMxN5YRjrU=: 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.643 10:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.583 nvme0n1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.583 10:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.153 nvme0n1 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.153 10:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:10.153 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.154 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.414 10:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.414 10:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.983 nvme0n1 00:32:10.983 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.983 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.983 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTdhMmRmMDg3N2JmMzQ3MjE0MjI2ZDIxNDMwMDY4MjE0MGNiOTNjZjZlODlmZDkz8jpS5g==: 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: ]] 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjMzOWJmNzI1NmU2NGEwYjU4ZTQ4ZjMyZjU5ODE0NjiYMrJJ: 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.984 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.244 10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.244 
10:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.814 nvme0n1 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:11.814 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjY0ZGE3MDJhMGE4NDVkOGRmN2FmMTQ1OGMxNDc1M2I4YzgwMzJjZjAyMTNjZTQ0ZmIyYTIxMDFiNTZkNjI1ZTdttNA=: 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.815 10:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 nvme0n1 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.757 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 request: 00:32:12.757 { 00:32:12.757 "name": "nvme0", 00:32:12.757 "trtype": "tcp", 00:32:12.757 "traddr": "10.0.0.1", 00:32:12.757 "adrfam": "ipv4", 00:32:12.757 "trsvcid": "4420", 00:32:12.757 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:12.757 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:12.757 "prchk_reftag": false, 00:32:12.757 "prchk_guard": false, 00:32:12.757 "hdgst": false, 00:32:12.757 "ddgst": false, 00:32:12.757 "allow_unrecognized_csi": false, 00:32:12.757 "method": "bdev_nvme_attach_controller", 00:32:12.757 "req_id": 1 00:32:12.757 } 00:32:12.757 Got JSON-RPC error response 00:32:12.757 response: 00:32:12.757 { 00:32:12.757 "code": -5, 00:32:12.757 "message": "Input/output error" 00:32:12.757 } 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
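The passing iterations traced above all repeat the same host-side RPC sequence for each digest/dhgroup/keyid combination: reconfigure the host with bdev_nvme_set_options, attach with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirm the controller name via bdev_nvme_get_controllers, then detach. A condensed, hedged sketch of one such round (sha512 / ffdhe8192 / keyid 0) follows; it assumes an SPDK checkout (so ./scripts/rpc.py is the hypothetical RPC client path), that the target subsystem nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 was configured elsewhere in the run to require DH-HMAC-CHAP with the matching key material, and that the key names key0/ckey0 were registered with the host application before this excerpt.

    # Sketch of one passing round from the loop above (sha512 / ffdhe8192 / keyid 0).
    # Assumption: run from an SPDK checkout; key0/ckey0 already known to the host app.
    rpc=./scripts/rpc.py

    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # The attach only succeeds if the DH-HMAC-CHAP handshake completed.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0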
00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.758 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.019 request: 00:32:13.019 { 00:32:13.019 "name": "nvme0", 00:32:13.019 "trtype": "tcp", 00:32:13.019 "traddr": "10.0.0.1", 00:32:13.019 "adrfam": "ipv4", 00:32:13.019 "trsvcid": "4420", 00:32:13.019 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:13.019 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:13.019 "prchk_reftag": false, 00:32:13.019 "prchk_guard": false, 00:32:13.019 "hdgst": false, 00:32:13.019 "ddgst": false, 00:32:13.019 "dhchap_key": "key2", 00:32:13.019 "allow_unrecognized_csi": false, 00:32:13.019 "method": "bdev_nvme_attach_controller", 00:32:13.019 "req_id": 1 00:32:13.019 } 00:32:13.019 Got JSON-RPC error response 00:32:13.019 response: 00:32:13.019 { 00:32:13.019 "code": -5, 00:32:13.019 "message": "Input/output error" 00:32:13.019 } 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
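Note on the attach attempts traced immediately above and below: these are deliberate negative-path checks. The kernel nvmet side is keyed for keyid 1 at this point, so attaching with no DH-HMAC-CHAP key, with key2 only, or with the mismatched key1/ckey2 pair is expected to come back with the -5 (Input/output error) JSON-RPC response, while the key1/ckey1 attempt that follows succeeds. rpc_cmd in the trace is the harness wrapper that forwards the same arguments to the target's JSON-RPC server; a roughly equivalent standalone invocation (illustrative sketch only — key1/ckey1 etc. are keyring entries registered earlier in the script, outside this excerpt) would look like:

  # expected to fail: controller key does not match what the nvmet target holds
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2

  # expected to succeed: key1/ckey1 is the pair provisioned for this host
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1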
00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.019 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.019 request: 00:32:13.019 { 00:32:13.019 "name": "nvme0", 00:32:13.019 "trtype": "tcp", 00:32:13.019 "traddr": "10.0.0.1", 00:32:13.019 "adrfam": "ipv4", 00:32:13.019 "trsvcid": "4420", 00:32:13.019 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:13.019 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:13.019 "prchk_reftag": false, 00:32:13.020 "prchk_guard": false, 00:32:13.020 "hdgst": false, 00:32:13.020 "ddgst": false, 00:32:13.020 "dhchap_key": "key1", 00:32:13.020 "dhchap_ctrlr_key": "ckey2", 00:32:13.020 "allow_unrecognized_csi": false, 00:32:13.020 "method": "bdev_nvme_attach_controller", 00:32:13.020 "req_id": 1 00:32:13.020 } 00:32:13.020 Got JSON-RPC error response 00:32:13.020 response: 00:32:13.020 { 00:32:13.020 "code": -5, 00:32:13.020 "message": "Input/output 
error" 00:32:13.020 } 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.020 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.280 nvme0n1 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.280 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.280 request: 00:32:13.281 { 00:32:13.281 "name": "nvme0", 00:32:13.281 "dhchap_key": "key1", 00:32:13.281 "dhchap_ctrlr_key": "ckey2", 00:32:13.281 "method": "bdev_nvme_set_keys", 00:32:13.281 "req_id": 1 00:32:13.281 } 00:32:13.281 Got JSON-RPC error response 00:32:13.281 response: 00:32:13.281 { 00:32:13.281 "code": -13, 00:32:13.281 "message": "Permission denied" 00:32:13.281 } 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.281 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.541 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:13.541 10:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:14.514 10:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUwZDczZmY3YTE0MjY3MDJhNmMyYjY0ZmYyYzMyMDExMjg0MmQxOTRkN2MzZDhlt5Cq7g==: 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: ]] 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjM5NjU2MmIyY2MxZDgxOTM4M2VjZDEyNjhlZDFhNDUwMGViOWQxMTNhYzcwNzNmZo4kiw==: 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.456 10:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.717 nvme0n1 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmUzODE4OTEyNWRiNTkzN2E1MGNhZjlmNGYyNjViNzMWVG3R: 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: ]] 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2VkZjIwNTVmNzUwNjAzYjljZmQ4ZjZkNWM2OWY0OWO7pjxa: 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.717 request: 00:32:15.717 { 00:32:15.717 "name": "nvme0", 00:32:15.717 "dhchap_key": "key2", 00:32:15.717 "dhchap_ctrlr_key": "ckey1", 00:32:15.717 "method": "bdev_nvme_set_keys", 00:32:15.717 "req_id": 1 00:32:15.717 } 00:32:15.717 Got JSON-RPC error response 00:32:15.717 response: 00:32:15.717 { 00:32:15.717 "code": -13, 00:32:15.717 "message": "Permission denied" 00:32:15.717 } 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:15.717 10:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:16.670 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.670 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:16.670 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.670 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:16.930 10:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.930 rmmod nvme_tcp 00:32:16.930 rmmod nvme_fabrics 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4059670 ']' 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4059670 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 4059670 ']' 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 4059670 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4059670 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4059670' 00:32:16.930 killing process with pid 4059670 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 4059670 00:32:16.930 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 4059670 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:17.191 10:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.103 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:19.369 10:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:23.574 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:23.574 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:23.574 10:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yr4 /tmp/spdk.key-null.yIR /tmp/spdk.key-sha256.afh /tmp/spdk.key-sha384.lzZ /tmp/spdk.key-sha512.1Hy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:23.574 10:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:27.778 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:32:27.778 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:27.778 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:27.778 00:32:27.778 real 1m4.811s 00:32:27.778 user 0m57.650s 00:32:27.778 sys 0m16.686s 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.778 ************************************ 00:32:27.778 END TEST nvmf_auth_host 00:32:27.778 ************************************ 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.778 ************************************ 00:32:27.778 START TEST nvmf_digest 00:32:27.778 ************************************ 00:32:27.778 10:24:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:27.778 * Looking for test storage... 
00:32:27.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:27.778 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:27.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.779 --rc genhtml_branch_coverage=1 00:32:27.779 --rc genhtml_function_coverage=1 00:32:27.779 --rc genhtml_legend=1 00:32:27.779 --rc geninfo_all_blocks=1 00:32:27.779 --rc geninfo_unexecuted_blocks=1 00:32:27.779 00:32:27.779 ' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:27.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.779 --rc genhtml_branch_coverage=1 00:32:27.779 --rc genhtml_function_coverage=1 00:32:27.779 --rc genhtml_legend=1 00:32:27.779 --rc geninfo_all_blocks=1 00:32:27.779 --rc geninfo_unexecuted_blocks=1 00:32:27.779 00:32:27.779 ' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:27.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.779 --rc genhtml_branch_coverage=1 00:32:27.779 --rc genhtml_function_coverage=1 00:32:27.779 --rc genhtml_legend=1 00:32:27.779 --rc geninfo_all_blocks=1 00:32:27.779 --rc geninfo_unexecuted_blocks=1 00:32:27.779 00:32:27.779 ' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:27.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.779 --rc genhtml_branch_coverage=1 00:32:27.779 --rc genhtml_function_coverage=1 00:32:27.779 --rc genhtml_legend=1 00:32:27.779 --rc geninfo_all_blocks=1 00:32:27.779 --rc geninfo_unexecuted_blocks=1 00:32:27.779 00:32:27.779 ' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.779 
10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.779 10:24:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.779 10:24:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:35.917 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.917 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.918 
10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:35.918 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:35.918 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:35.918 Found net devices under 0000:31:00.0: cvl_0_0 
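Side note on the device detection in this part of the trace: the e810/x722/mlx arrays are filled purely from PCI vendor:device IDs, and 0x8086:0x159b (bound to the ice driver here) is one of the Intel E810 parts the nvmf tests look for. A rough sysfs-only sketch of the same classification, shown only for illustration and not part of the captured output:

  # list E810 (8086:159b) ports and their netdev names via sysfs
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")   # e.g. 0x8086 for Intel
      device=$(cat "$dev/device")   # e.g. 0x159b for E810
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          echo "E810 port $(basename "$dev"): $(ls "$dev/net" 2>/dev/null)"
      fi
  done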
00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:35.918 Found net devices under 0000:31:00.1: cvl_0_1 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.918 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:32:36.179 00:32:36.179 --- 10.0.0.2 ping statistics --- 00:32:36.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.179 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:32:36.179 00:32:36.179 --- 10.0.0.1 ping statistics --- 00:32:36.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.179 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.179 ************************************ 00:32:36.179 START TEST nvmf_digest_clean 00:32:36.179 ************************************ 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.179 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4078935 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4078935 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4078935 ']' 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:36.439 10:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.439 [2024-11-06 10:24:39.734870] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:36.439 [2024-11-06 10:24:39.734922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.439 [2024-11-06 10:24:39.821833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.439 [2024-11-06 10:24:39.859023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.439 [2024-11-06 10:24:39.859057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.439 [2024-11-06 10:24:39.859067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.439 [2024-11-06 10:24:39.859075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.439 [2024-11-06 10:24:39.859081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
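The nvmf_tgt instance starting here runs inside the network namespace that nvmf_tcp_init set up above: the target-side E810 port is moved into cvl_0_0_ns_spdk with 10.0.0.2/24 while the initiator port keeps 10.0.0.1/24 in the root namespace, so both ends of the NVMe/TCP connection use real hardware on one host. Condensed from the trace, the wiring, firewall rule, and ping checks are:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port

ping -c 1 10.0.0.2                                                   # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root namespace

The target itself is launched under ip netns exec cvl_0_0_ns_spdk, which is why its RPC socket stays reachable from the root namespace (it is a filesystem UNIX socket) while its TCP listener lives on 10.0.0.2.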
00:32:36.439 [2024-11-06 10:24:39.859665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:37.380 null0 00:32:37.380 [2024-11-06 10:24:40.635834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.380 [2024-11-06 10:24:40.660039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4079168 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4079168 /var/tmp/bperf.sock 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4079168 ']' 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:37.380 10:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:37.380 [2024-11-06 10:24:40.718643] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:37.380 [2024-11-06 10:24:40.718691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079168 ] 00:32:37.380 [2024-11-06 10:24:40.814174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.380 [2024-11-06 10:24:40.850100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.319 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:38.319 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:38.319 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:38.319 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:38.320 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:38.320 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.320 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.580 nvme0n1 00:32:38.580 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:38.580 10:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:38.840 Running I/O for 2 seconds... 
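Each bperf run here follows the same RPC-driven sequence just traced: bdevperf is started with -z --wait-for-rpc, the test finishes framework initialization over /var/tmp/bperf.sock, attaches the remote namespace over NVMe/TCP with data digest enabled, and then kicks off the timed workload through bdevperf.py. The same sequence as standalone commands, with the paths and NQN used in this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Complete init; bdevperf idles until this when launched with --wait-for-rpc.
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" framework_start_init

# 2. Attach the target's namespace over NVMe/TCP; --ddgst turns on data digest,
#    which is what generates the crc32c work this test measures.
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Run the workload configured on the bdevperf command line (-w/-o/-q/-t).
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests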
00:32:40.721 19298.00 IOPS, 75.38 MiB/s [2024-11-06T09:24:44.222Z] 19383.50 IOPS, 75.72 MiB/s 00:32:40.721 Latency(us) 00:32:40.721 [2024-11-06T09:24:44.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.721 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:40.721 nvme0n1 : 2.00 19403.60 75.80 0.00 0.00 6589.49 2894.51 19879.25 00:32:40.721 [2024-11-06T09:24:44.222Z] =================================================================================================================== 00:32:40.721 [2024-11-06T09:24:44.222Z] Total : 19403.60 75.80 0.00 0.00 6589.49 2894.51 19879.25 00:32:40.721 { 00:32:40.721 "results": [ 00:32:40.721 { 00:32:40.721 "job": "nvme0n1", 00:32:40.721 "core_mask": "0x2", 00:32:40.721 "workload": "randread", 00:32:40.721 "status": "finished", 00:32:40.721 "queue_depth": 128, 00:32:40.721 "io_size": 4096, 00:32:40.721 "runtime": 2.004525, 00:32:40.721 "iops": 19403.59935645602, 00:32:40.721 "mibps": 75.79530998615633, 00:32:40.721 "io_failed": 0, 00:32:40.721 "io_timeout": 0, 00:32:40.721 "avg_latency_us": 6589.48813506449, 00:32:40.721 "min_latency_us": 2894.5066666666667, 00:32:40.721 "max_latency_us": 19879.253333333334 00:32:40.721 } 00:32:40.721 ], 00:32:40.721 "core_count": 1 00:32:40.721 } 00:32:40.721 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:40.721 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:40.721 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:40.721 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:40.721 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:40.721 | select(.opcode=="crc32c") 00:32:40.721 | "\(.module_name) \(.executed)"' 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4079168 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4079168 ']' 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4079168 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4079168 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4079168' 00:32:40.981 killing process with pid 4079168 00:32:40.981 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4079168 00:32:40.981 Received shutdown signal, test time was about 2.000000 seconds 00:32:40.981 00:32:40.981 Latency(us) 00:32:40.982 [2024-11-06T09:24:44.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.982 [2024-11-06T09:24:44.483Z] =================================================================================================================== 00:32:40.982 [2024-11-06T09:24:44.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4079168 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4079957 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4079957 /var/tmp/bperf.sock 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4079957 ']' 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:40.982 10:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.242 [2024-11-06 10:24:44.509241] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
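The pass/fail decision for each run above comes from bdevperf's accel statistics: the test pulls accel_get_stats, filters for the crc32c opcode, and checks that the executions happened in the expected module (software here, since DSA is disabled with scan_dsa=false) with a non-zero execution count. A sketch of that check using the same jq filter as the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Emits "<module_name> <executed>" for the crc32c opcode, e.g. "software <count>".
read -r acc_module acc_executed < <(
  "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" accel_get_stats |
  jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

(( acc_executed > 0 ))        || echo "crc32c was never executed"
[[ $acc_module == software ]] || echo "expected the software module, got $acc_module"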
00:32:41.242 [2024-11-06 10:24:44.509300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079957 ] 00:32:41.242 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:41.242 Zero copy mechanism will not be used. 00:32:41.242 [2024-11-06 10:24:44.599373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.242 [2024-11-06 10:24:44.628312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.812 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:41.812 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:41.812 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:41.812 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:41.812 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:42.072 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.072 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.642 nvme0n1 00:32:42.642 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:42.642 10:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.642 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.642 Zero copy mechanism will not be used. 00:32:42.642 Running I/O for 2 seconds... 
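The Latency tables in this output report both IOPS and MiB/s for the same job, and the two columns are tied together by the I/O size: throughput is IOPS times bytes per I/O. For the 4 KiB randread table above, for example:

# 19403.60 IOPS x 4096 B = 75.80 MiB/s, matching the table's MiB/s column.
awk 'BEGIN { printf "%.2f MiB/s\n", 19403.60 * 4096 / (1024 * 1024) }'

The 131072-byte runs scale the same way, with each I/O contributing 0.125 MiB.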
00:32:44.523 3041.00 IOPS, 380.12 MiB/s [2024-11-06T09:24:48.284Z] 3020.50 IOPS, 377.56 MiB/s 00:32:44.783 Latency(us) 00:32:44.783 [2024-11-06T09:24:48.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.783 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:44.783 nvme0n1 : 2.04 2963.97 370.50 0.00 0.00 5293.23 928.43 47841.28 00:32:44.783 [2024-11-06T09:24:48.284Z] =================================================================================================================== 00:32:44.783 [2024-11-06T09:24:48.284Z] Total : 2963.97 370.50 0.00 0.00 5293.23 928.43 47841.28 00:32:44.783 { 00:32:44.783 "results": [ 00:32:44.783 { 00:32:44.783 "job": "nvme0n1", 00:32:44.783 "core_mask": "0x2", 00:32:44.783 "workload": "randread", 00:32:44.783 "status": "finished", 00:32:44.783 "queue_depth": 16, 00:32:44.783 "io_size": 131072, 00:32:44.783 "runtime": 2.043543, 00:32:44.783 "iops": 2963.9699286973655, 00:32:44.783 "mibps": 370.4962410871707, 00:32:44.783 "io_failed": 0, 00:32:44.783 "io_timeout": 0, 00:32:44.783 "avg_latency_us": 5293.226807550493, 00:32:44.783 "min_latency_us": 928.4266666666666, 00:32:44.783 "max_latency_us": 47841.28 00:32:44.783 } 00:32:44.783 ], 00:32:44.783 "core_count": 1 00:32:44.783 } 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:44.783 | select(.opcode=="crc32c") 00:32:44.783 | "\(.module_name) \(.executed)"' 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4079957 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4079957 ']' 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4079957 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:44.783 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4079957 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4079957' 00:32:45.044 killing process with pid 4079957 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4079957 00:32:45.044 Received shutdown signal, test time was about 2.000000 seconds 00:32:45.044 00:32:45.044 Latency(us) 00:32:45.044 [2024-11-06T09:24:48.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.044 [2024-11-06T09:24:48.545Z] =================================================================================================================== 00:32:45.044 [2024-11-06T09:24:48.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4079957 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4080646 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4080646 /var/tmp/bperf.sock 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4080646 ']' 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.044 10:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.044 [2024-11-06 10:24:48.477654] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:32:45.044 [2024-11-06 10:24:48.477716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080646 ] 00:32:45.305 [2024-11-06 10:24:48.565206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.305 [2024-11-06 10:24:48.594746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.874 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:45.874 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:45.874 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:45.874 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:45.874 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:46.134 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.135 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.394 nvme0n1 00:32:46.394 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:46.394 10:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:46.655 Running I/O for 2 seconds... 
00:32:48.534 21385.00 IOPS, 83.54 MiB/s [2024-11-06T09:24:52.035Z] 21480.50 IOPS, 83.91 MiB/s 00:32:48.534 Latency(us) 00:32:48.534 [2024-11-06T09:24:52.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.534 nvme0n1 : 2.00 21506.34 84.01 0.00 0.00 5946.85 2266.45 13981.01 00:32:48.534 [2024-11-06T09:24:52.035Z] =================================================================================================================== 00:32:48.534 [2024-11-06T09:24:52.035Z] Total : 21506.34 84.01 0.00 0.00 5946.85 2266.45 13981.01 00:32:48.534 { 00:32:48.534 "results": [ 00:32:48.534 { 00:32:48.534 "job": "nvme0n1", 00:32:48.534 "core_mask": "0x2", 00:32:48.534 "workload": "randwrite", 00:32:48.534 "status": "finished", 00:32:48.534 "queue_depth": 128, 00:32:48.534 "io_size": 4096, 00:32:48.534 "runtime": 2.003549, 00:32:48.534 "iops": 21506.337004984656, 00:32:48.534 "mibps": 84.00912892572131, 00:32:48.534 "io_failed": 0, 00:32:48.534 "io_timeout": 0, 00:32:48.534 "avg_latency_us": 5946.848135409657, 00:32:48.534 "min_latency_us": 2266.4533333333334, 00:32:48.534 "max_latency_us": 13981.013333333334 00:32:48.534 } 00:32:48.534 ], 00:32:48.534 "core_count": 1 00:32:48.534 } 00:32:48.534 10:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:48.534 10:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:48.534 10:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:48.535 10:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:48.535 | select(.opcode=="crc32c") 00:32:48.535 | "\(.module_name) \(.executed)"' 00:32:48.535 10:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4080646 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4080646 ']' 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4080646 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4080646 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4080646' 00:32:48.794 killing process with pid 4080646 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4080646 00:32:48.794 Received shutdown signal, test time was about 2.000000 seconds 00:32:48.794 00:32:48.794 Latency(us) 00:32:48.794 [2024-11-06T09:24:52.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.794 [2024-11-06T09:24:52.295Z] =================================================================================================================== 00:32:48.794 [2024-11-06T09:24:52.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:48.794 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4080646 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:49.054 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4081331 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4081331 /var/tmp/bperf.sock 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4081331 ']' 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:49.055 10:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.055 [2024-11-06 10:24:52.355353] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
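killprocess above is deliberately defensive: before sending the signal it reads the process name for the PID via ps and inspects it (the '[' reactor_1 = sudo ']' test), since the helper has to treat processes launched through sudo differently, and only then kills and reaps the bperf instance. A simplified sketch of that shutdown pattern, using the PID just killed above and leaving the sudo special case out:

pid=4080646                                      # bperf PID from the run above
process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for bdevperf

if [[ $process_name == sudo ]]; then
  : # the real helper resolves and kills the child of sudo instead (not shown here)
fi

echo "killing process with pid $pid"
kill "$pid"
wait "$pid" 2>/dev/null || true                  # reap; bdevperf prints its shutdown summary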
00:32:49.055 [2024-11-06 10:24:52.355409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081331 ] 00:32:49.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:49.055 Zero copy mechanism will not be used. 00:32:49.055 [2024-11-06 10:24:52.444819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.055 [2024-11-06 10:24:52.473483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.995 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.255 nvme0n1 00:32:50.255 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:50.255 10:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:50.515 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:50.515 Zero copy mechanism will not be used. 00:32:50.515 Running I/O for 2 seconds... 
00:32:52.543 3192.00 IOPS, 399.00 MiB/s [2024-11-06T09:24:56.044Z] 3813.00 IOPS, 476.62 MiB/s 00:32:52.543 Latency(us) 00:32:52.543 [2024-11-06T09:24:56.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.543 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:52.543 nvme0n1 : 2.00 3813.90 476.74 0.00 0.00 4189.66 1815.89 10321.92 00:32:52.543 [2024-11-06T09:24:56.044Z] =================================================================================================================== 00:32:52.543 [2024-11-06T09:24:56.044Z] Total : 3813.90 476.74 0.00 0.00 4189.66 1815.89 10321.92 00:32:52.543 { 00:32:52.543 "results": [ 00:32:52.543 { 00:32:52.543 "job": "nvme0n1", 00:32:52.543 "core_mask": "0x2", 00:32:52.543 "workload": "randwrite", 00:32:52.543 "status": "finished", 00:32:52.543 "queue_depth": 16, 00:32:52.543 "io_size": 131072, 00:32:52.543 "runtime": 2.00451, 00:32:52.543 "iops": 3813.8996562750995, 00:32:52.543 "mibps": 476.73745703438743, 00:32:52.543 "io_failed": 0, 00:32:52.543 "io_timeout": 0, 00:32:52.543 "avg_latency_us": 4189.660616524961, 00:32:52.543 "min_latency_us": 1815.8933333333334, 00:32:52.543 "max_latency_us": 10321.92 00:32:52.543 } 00:32:52.543 ], 00:32:52.543 "core_count": 1 00:32:52.543 } 00:32:52.543 10:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:52.543 10:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:52.543 10:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:52.543 10:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:52.543 | select(.opcode=="crc32c") 00:32:52.543 | "\(.module_name) \(.executed)"' 00:32:52.543 10:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4081331 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4081331 ']' 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4081331 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:52.543 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4081331 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4081331' 00:32:52.804 killing process with pid 4081331 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4081331 00:32:52.804 Received shutdown signal, test time was about 2.000000 seconds 00:32:52.804 00:32:52.804 Latency(us) 00:32:52.804 [2024-11-06T09:24:56.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.804 [2024-11-06T09:24:56.305Z] =================================================================================================================== 00:32:52.804 [2024-11-06T09:24:56.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4081331 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4078935 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4078935 ']' 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4078935 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4078935 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4078935' 00:32:52.804 killing process with pid 4078935 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4078935 00:32:52.804 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4078935 00:32:53.065 00:32:53.065 real 0m16.668s 00:32:53.065 user 0m33.105s 00:32:53.065 sys 0m3.399s 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.065 ************************************ 00:32:53.065 END TEST nvmf_digest_clean 00:32:53.065 ************************************ 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:53.065 ************************************ 00:32:53.065 START TEST nvmf_digest_error 00:32:53.065 ************************************ 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4082221 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4082221 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4082221 ']' 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:53.065 10:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.065 [2024-11-06 10:24:56.469037] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:53.065 [2024-11-06 10:24:56.469090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.065 [2024-11-06 10:24:56.554389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.326 [2024-11-06 10:24:56.593252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.326 [2024-11-06 10:24:56.593287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.326 [2024-11-06 10:24:56.593295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.326 [2024-11-06 10:24:56.593301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.326 [2024-11-06 10:24:56.593307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
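nvmfappstart launched this second target (pid 4082221) inside the same namespace with --wait-for-rpc, and waitforlisten now blocks until /var/tmp/spdk.sock answers. The helper's internals are not shown in this trace; one way to implement such a wait, sketched here as an assumption rather than the actual code, is to poll a harmless RPC until it succeeds while checking the process is still alive:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
pid=4082221                       # nvmfpid from the trace
rpc_addr=/var/tmp/spdk.sock
max_retries=100

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < max_retries; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo "process $pid exited prematurely"; exit 1; }
  if "$SPDK"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
    break                         # the app is listening and answering RPCs
  fi
  sleep 0.5
done
(( i < max_retries )) || { echo "timed out waiting for $rpc_addr"; exit 1; }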
00:32:53.326 [2024-11-06 10:24:56.593912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.896 [2024-11-06 10:24:57.291893] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.896 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.896 null0 00:32:53.896 [2024-11-06 10:24:57.373963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.157 [2024-11-06 10:24:57.398179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4082399 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4082399 /var/tmp/bperf.sock 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4082399 ']' 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
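common_target_config pushes the target-side setup through rpc_cmd, which is why only its effects show up here: the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420 that the bperf instances attach to. Spelled out as individual rpc.py calls, a roughly equivalent setup would look like the sketch below; the RPC names are standard SPDK RPCs, but the specific options (null bdev size and block size, serial number, and the extra transport option carried in NVMF_TRANSPORT_OPTS='-t tcp -o') are assumptions, not copied from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp              # this run adds '-o' via NVMF_TRANSPORT_OPTS
"$SPDK"/scripts/rpc.py bdev_null_create null0 100 4096           # 100 MiB null bdev, 4096-byte blocks (assumed sizes)
"$SPDK"/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" null0
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420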
00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:54.157 10:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.157 [2024-11-06 10:24:57.453681] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:54.157 [2024-11-06 10:24:57.453731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082399 ] 00:32:54.157 [2024-11-06 10:24:57.543192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.157 [2024-11-06 10:24:57.573119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.098 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.359 nvme0n1 00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
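What distinguishes nvmf_digest_error from the clean variant is accel error injection on the target: at start-up crc32c is assigned to the 'error' accel module (the accel_rpc notice above), injection stays disabled while the bperf controller attaches, and a little further down the trace it is switched to corrupt the next 256 crc32c operations, which is what produces the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR read completions below. The three RPCs involved (the trace issues them through rpc_cmd):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# At target start-up: route every crc32c operation through the error-injection module.
"$SPDK"/scripts/rpc.py accel_assign_opc -o crc32c -m error

# Pass-through while the bdevperf controller is being attached...
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# ...then corrupt the next 256 crc32c operations once I/O is about to start.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

The host side is prepared for this with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, so the digest failures are counted and retried rather than failing the job outright.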
00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:55.359 10:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:55.619 Running I/O for 2 seconds... 00:32:55.619 [2024-11-06 10:24:58.948359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:58.948392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:58.948401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:58.960488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:58.960508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:58.960515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:58.971564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:58.971582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:58.971589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:58.984211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:58.984231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:58.984239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:58.996571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:58.996589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:58.996596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.008947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.008970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.008977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.022168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.022185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.022192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.035255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.035273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.035280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.048237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.048255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.048261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.059661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.059678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.059684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.071192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.071210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.071217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.083622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.083640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.083647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.096723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.096741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.096748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.619 [2024-11-06 10:24:59.110310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.619 [2024-11-06 10:24:59.110328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.619 [2024-11-06 10:24:59.110335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.122246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.122263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.122270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.133960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.133977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.133984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.147952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.147969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.147976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.161351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.161368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.161375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.171966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.171983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.171989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.185133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.185151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.185158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.196860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.196883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.196889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.210617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.210636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.210643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.222394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.222411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.222421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.233955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.233972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.233979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.247996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.248014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.248020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.258248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.258266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.258273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.271791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.271809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:55.880 [2024-11-06 10:24:59.271815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.285873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.285890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.285897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.298361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.298379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.298386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.309223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.309240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.309247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.323749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.323767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.323773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.336934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.336955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.336962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.348307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.348324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.348331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.360461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.360478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.360485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.880 [2024-11-06 10:24:59.373218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:55.880 [2024-11-06 10:24:59.373236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.880 [2024-11-06 10:24:59.373242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.386518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.386536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.386543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.400129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.400147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.400153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.410350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.410367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.410374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.424810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.424828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.424835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.439298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.439315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.439322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.452176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.452193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.452200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.464149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.464167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.464174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.476216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.476234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.476240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.488698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.488715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.488722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.500716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.500735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.500742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.514949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.514967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.514975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.528044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.528068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.538423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 
00:32:56.142 [2024-11-06 10:24:59.538441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.538447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.550739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.550761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.550768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.563387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.563406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.563413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.577265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.577283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.577289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.591280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.591298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.591305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.602911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.142 [2024-11-06 10:24:59.602929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.142 [2024-11-06 10:24:59.602935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.142 [2024-11-06 10:24:59.614831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.143 [2024-11-06 10:24:59.614849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.143 [2024-11-06 10:24:59.614855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.143 [2024-11-06 10:24:59.628195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.143 [2024-11-06 10:24:59.628213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.143 [2024-11-06 10:24:59.628219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.143 [2024-11-06 10:24:59.641439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.143 [2024-11-06 10:24:59.641456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.143 [2024-11-06 10:24:59.641463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.651875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.651893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.651900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.666043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.666061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.666068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.679412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.679429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.679436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.690553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.690571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.690578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.703975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.703999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.715000] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.715017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.715024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.728696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.728714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.728721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.741741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.403 [2024-11-06 10:24:59.741759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.403 [2024-11-06 10:24:59.741766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.403 [2024-11-06 10:24:59.754223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.754241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.754247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.766439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.766457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.766468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.779815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.779833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.791476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.791494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.791501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.804388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.804406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.804412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.815638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.815657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.815664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.829116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.829134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.829141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.841546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.841564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.841571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.854799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.854817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.854824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.867712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.867730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.867737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.879772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.879792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.879799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.404 [2024-11-06 10:24:59.892135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.404 [2024-11-06 10:24:59.892153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.404 [2024-11-06 10:24:59.892159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.904266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.904284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.904290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.919536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.919554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.919561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 19986.00 IOPS, 78.07 MiB/s [2024-11-06T09:25:00.166Z] [2024-11-06 10:24:59.934576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.934593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.934600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.949718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.949736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.949742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.960292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.960310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.960317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.975266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.975284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 
[2024-11-06 10:24:59.975291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:24:59.988762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:24:59.988781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:24:59.988788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.001155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.001173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.001180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.015008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.015027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.015034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.024758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.024778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.024786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.038986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.039005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.039012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.054837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.054856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.054866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.066075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.066093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7244 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.066100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.079512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.079531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.079538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.091487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.091505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.665 [2024-11-06 10:25:00.091512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.665 [2024-11-06 10:25:00.103610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.665 [2024-11-06 10:25:00.103633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.666 [2024-11-06 10:25:00.103639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.666 [2024-11-06 10:25:00.117970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.666 [2024-11-06 10:25:00.117988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.666 [2024-11-06 10:25:00.117994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.666 [2024-11-06 10:25:00.129427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.666 [2024-11-06 10:25:00.129445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.666 [2024-11-06 10:25:00.129451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.666 [2024-11-06 10:25:00.141115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.666 [2024-11-06 10:25:00.141134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.666 [2024-11-06 10:25:00.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.666 [2024-11-06 10:25:00.154406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.666 [2024-11-06 10:25:00.154425] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.666 [2024-11-06 10:25:00.154432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.167432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.167449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.167456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.180477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.180495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.180502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.194255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.194272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.194279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.205908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.205925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.205932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.216289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.216307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.216314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.229891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.229910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.229916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.242327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 
10:25:00.242346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.242352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.255528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.255546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.255553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.268017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.268035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.268042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.281883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.281901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.281907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.292503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.292521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.292527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.306585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.306603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.306609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.319248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.319265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.319275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.332062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.332080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.332086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.345375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.345393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.345400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.354932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.354951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.354957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.368858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.368881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.368888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.382154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.382172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.382178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.395009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.395026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.407329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.407347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.407354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:56.927 [2024-11-06 10:25:00.419336] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:56.927 [2024-11-06 10:25:00.419354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.927 [2024-11-06 10:25:00.419360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.431467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.431489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.431495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.444340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.444358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.444365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.456504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.456522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.456528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.470294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.470311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.470318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.483576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.483594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.483600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.495431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.495449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.495455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:57.189 [2024-11-06 10:25:00.507991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.508009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.508015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.520895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.520912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.520919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.534246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.534264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.534271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.545398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.545416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.545422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.558924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.558942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.558948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.571422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.571440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.571446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.585362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.585379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.585385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.599506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.599524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.599530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.609486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.609504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.609511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.623992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.624010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.624017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.638215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.638234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.638240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.648136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.648154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.648164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.661646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.661664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.661670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.673993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.674011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.674018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.189 [2024-11-06 10:25:00.687206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.189 [2024-11-06 10:25:00.687223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.189 [2024-11-06 10:25:00.687230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.450 [2024-11-06 10:25:00.698741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.450 [2024-11-06 10:25:00.698759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.450 [2024-11-06 10:25:00.698766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.450 [2024-11-06 10:25:00.712790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.450 [2024-11-06 10:25:00.712809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.712815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.726236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.726253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.726260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.736126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.736144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.736150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.748380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.748398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.748404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.761661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.761679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:57.451 [2024-11-06 10:25:00.761685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.776056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.776074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.788984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.789001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.789008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.798902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.798919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.798926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.812447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.812464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.812471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.826220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.826237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.826243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.835960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.835978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.835984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.849595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.849613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:19672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.849620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.862109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.862127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.862136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.875049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.875067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.875073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.887166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.887184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.887190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.900486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.900503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.913928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.913945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 [2024-11-06 10:25:00.926264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21f52f0) 00:32:57.451 [2024-11-06 10:25:00.926282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.451 [2024-11-06 10:25:00.926289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.451 20047.50 IOPS, 78.31 MiB/s 00:32:57.451 Latency(us) 00:32:57.451 [2024-11-06T09:25:00.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.451 Job: nvme0n1 (Core Mask 0x2, 
workload: randread, depth: 128, IO size: 4096) 00:32:57.451 nvme0n1 : 2.00 20066.23 78.38 0.00 0.00 6371.51 2321.07 18240.85 00:32:57.451 [2024-11-06T09:25:00.952Z] =================================================================================================================== 00:32:57.451 [2024-11-06T09:25:00.952Z] Total : 20066.23 78.38 0.00 0.00 6371.51 2321.07 18240.85 00:32:57.451 { 00:32:57.451 "results": [ 00:32:57.451 { 00:32:57.451 "job": "nvme0n1", 00:32:57.451 "core_mask": "0x2", 00:32:57.451 "workload": "randread", 00:32:57.451 "status": "finished", 00:32:57.451 "queue_depth": 128, 00:32:57.451 "io_size": 4096, 00:32:57.451 "runtime": 2.004512, 00:32:57.451 "iops": 20066.23058380294, 00:32:57.451 "mibps": 78.38371321798023, 00:32:57.451 "io_failed": 0, 00:32:57.451 "io_timeout": 0, 00:32:57.451 "avg_latency_us": 6371.507920675567, 00:32:57.451 "min_latency_us": 2321.0666666666666, 00:32:57.451 "max_latency_us": 18240.853333333333 00:32:57.451 } 00:32:57.451 ], 00:32:57.451 "core_count": 1 00:32:57.451 } 00:32:57.712 10:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:57.712 10:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:57.712 10:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:57.712 10:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:57.712 | .driver_specific 00:32:57.713 | .nvme_error 00:32:57.713 | .status_code 00:32:57.713 | .command_transient_transport_error' 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4082399 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4082399 ']' 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4082399 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4082399 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4082399' 00:32:57.713 killing process with pid 4082399 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4082399 00:32:57.713 Received shutdown signal, test time was about 2.000000 seconds 00:32:57.713 00:32:57.713 Latency(us) 00:32:57.713 [2024-11-06T09:25:01.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.713 [2024-11-06T09:25:01.214Z] 
=================================================================================================================== 00:32:57.713 [2024-11-06T09:25:01.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.713 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4082399 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4083103 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4083103 /var/tmp/bperf.sock 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4083103 ']' 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:57.974 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.974 [2024-11-06 10:25:01.321031] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:57.974 [2024-11-06 10:25:01.321084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083103 ] 00:32:57.974 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:57.974 Zero copy mechanism will not be used. 
00:32:57.974 [2024-11-06 10:25:01.377426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.974 [2024-11-06 10:25:01.406795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.234 10:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.494 nvme0n1 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:58.755 10:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.755 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:58.755 Zero copy mechanism will not be used. 00:32:58.755 Running I/O for 2 seconds... 
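The trace above is the substance of this stage: host/digest.sh reads the transient-transport-error count back from the finished bperf run over its RPC socket, tears that run down, and then prepares the next run (randread, 128 KiB I/O, queue depth 16) with the TCP data digest (--ddgst) enabled and crc32c corruption injected every 32 operations, which is what produces the wall of "data digest error" lines that follows. A minimal sketch of that sequence, assuming the socket path, target address, and bdev names shown in the trace; the jq filter and RPC method names are taken from the log itself, while the variable names and the single-socket simplification are illustrative only:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1) Count READ commands that completed with TRANSIENT TRANSPORT ERROR on the
#    previous run; the stage only passes if at least one error was observed
#    (the "(( 157 > 0 ))" check seen earlier in the trace).
errs=$("$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) || exit 1

# 2) For the next run: keep per-command NVMe error statistics, retry failed I/O
#    indefinitely, and attach the TCP controller with the data digest enabled.
"$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
       -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3) Corrupt every 32nd crc32c operation in the accel framework so the computed
#    data digest stops matching, then kick off the timed workload. (In the
#    harness, rpc_cmd chooses which application socket this RPC goes to; a
#    single socket is used here purely for illustration.)
"$SPDK_DIR"/scripts/rpc.py -s "$BPERF_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests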
00:32:58.755 [2024-11-06 10:25:02.116558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.755 [2024-11-06 10:25:02.116591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.755 [2024-11-06 10:25:02.116600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.755 [2024-11-06 10:25:02.125968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.755 [2024-11-06 10:25:02.125991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.755 [2024-11-06 10:25:02.125998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.755 [2024-11-06 10:25:02.135363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.755 [2024-11-06 10:25:02.135383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.755 [2024-11-06 10:25:02.135390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.755 [2024-11-06 10:25:02.146561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.755 [2024-11-06 10:25:02.146580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.755 [2024-11-06 10:25:02.146587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.755 [2024-11-06 10:25:02.158042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.755 [2024-11-06 10:25:02.158061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.755 [2024-11-06 10:25:02.158069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.755 [2024-11-06 10:25:02.166036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.166054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.166061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.176488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.176506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.176513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.186805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.186824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.186831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.198446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.198465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.198471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.207308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.207327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.207334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.217443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.217462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.217472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.227698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.227716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.227723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.239062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.239080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.239086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.756 [2024-11-06 10:25:02.250453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:58.756 [2024-11-06 10:25:02.250472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.756 [2024-11-06 10:25:02.250478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.262416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.262441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.272109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.272128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.272135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.280622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.280641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.280648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.291830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.291847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.291854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.302717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.302737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.302743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.313777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.313799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.313806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.321821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.321839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:59.017 [2024-11-06 10:25:02.321845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.328871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.328890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.328896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.339525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.339544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.339550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.347841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.347859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.347871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.359347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.359366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.017 [2024-11-06 10:25:02.359372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.017 [2024-11-06 10:25:02.367733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.017 [2024-11-06 10:25:02.367751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.367758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.377860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.377882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.377889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.386281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.386298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.386305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.397268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.397293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.407664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.407682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.418421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.418440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.428775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.428793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.428800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.437164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.437183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.437189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.449350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.449368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.449375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.458185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.458204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.458210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.467517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.467536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.467542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.478730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.478748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.478757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.489320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.489338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.489344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.497404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.497422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.497429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.018 [2024-11-06 10:25:02.507871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.018 [2024-11-06 10:25:02.507889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.018 [2024-11-06 10:25:02.507895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.520275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.520294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.520301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.531112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 
00:32:59.279 [2024-11-06 10:25:02.531131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.531138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.541976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.541995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.542002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.552184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.552203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.552209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.564699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.564718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.564725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.577587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.577606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.577613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.589424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.589443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.589449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.599997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.600016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.600023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.610356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.610374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.610381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.621518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.621537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.621543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.632745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.632764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.632771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.643463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.643482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.643489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.654402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.279 [2024-11-06 10:25:02.654421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.279 [2024-11-06 10:25:02.654428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.279 [2024-11-06 10:25:02.666202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.666221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.666231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.676292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.676311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.676317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.687051] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.687070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.687077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.697940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.697959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.697965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.706592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.706611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.706617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.715070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.715089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.715095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.726047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.726065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.726072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.736466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.736485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.736491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.746792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.746812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.746818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:59.280 [2024-11-06 10:25:02.756453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.756476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.756482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.280 [2024-11-06 10:25:02.768151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.280 [2024-11-06 10:25:02.768170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.280 [2024-11-06 10:25:02.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.541 [2024-11-06 10:25:02.779338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.541 [2024-11-06 10:25:02.779360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.541 [2024-11-06 10:25:02.779367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.541 [2024-11-06 10:25:02.790435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.790455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.790461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.801995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.802013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.802020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.813975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.813994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.814001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.826905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.826924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.826931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.838678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.838697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.838703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.850599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.850618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.850624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.862892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.862911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.862918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.872006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.872025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.872031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.881004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.881023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.881030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.888868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.888886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.888893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.899022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.910209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.910228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.910235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.921376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.921401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.932905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.932924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.932930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.941630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.941650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.941660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.952853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.952876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.952882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.963476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.963495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.963502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.972158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.972177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.972184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.983226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.983246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.983252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:02.993184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:02.993204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:02.993211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:03.003574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:03.003594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:03.003601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:03.011641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:03.011661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:03.011667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:03.021873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:03.021893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:03.021900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.542 [2024-11-06 10:25:03.032967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.542 [2024-11-06 10:25:03.032990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.542 [2024-11-06 10:25:03.032996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.043995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.044015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:59.803 [2024-11-06 10:25:03.044021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.054358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.054377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.054383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.062446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.062466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.062473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.072932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.072951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.072958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.083464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.083484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.083491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.093475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.093494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.093501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.104937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.104957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.104963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.803 2989.00 IOPS, 373.62 MiB/s [2024-11-06T09:25:03.304Z] [2024-11-06 10:25:03.116681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.116701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.116707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.125903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.125923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.125930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.138461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.138480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.138486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.151032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.151051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.151057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.803 [2024-11-06 10:25:03.160633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.803 [2024-11-06 10:25:03.160652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.803 [2024-11-06 10:25:03.160658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.169428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.169448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.169455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.179077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.179095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.179102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.184774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 
[2024-11-06 10:25:03.184793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.184799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.195818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.195837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.195843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.206796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.206815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.206825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.216710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.216728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.216735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.227802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.227821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.227827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.239811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.239830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.239837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.251091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.251109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.251116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.261449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.261468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.261475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.271816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.271834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.271841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.282687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.282705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.282712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.804 [2024-11-06 10:25:03.294394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:32:59.804 [2024-11-06 10:25:03.294412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.804 [2024-11-06 10:25:03.294419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.305984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.306002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.306009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.317676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.317694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.317701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.328294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.328313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.328320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.337295] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.337314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.337321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.347386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.347405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.347412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.359103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.359123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.359129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.369963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.369982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.369989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.381046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.381065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.381072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.391900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.391919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.391933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.065 [2024-11-06 10:25:03.400167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.065 [2024-11-06 10:25:03.400187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.065 [2024-11-06 10:25:03.400194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:00.066 [2024-11-06 10:25:03.411674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.411694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.411701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.423806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.423825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.423832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.434111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.434130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.434137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.443734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.443753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.443760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.453881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.453899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.453907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.464467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.464486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.464493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.470854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.470878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.470884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.481826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.481849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.481856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.491682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.491701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.491708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.502947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.502966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.502973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.513814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.513833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.513840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.524068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.524088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.524094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.535851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.535875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.535881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.543761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.543781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.543788] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.552519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.552546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.066 [2024-11-06 10:25:03.563300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.066 [2024-11-06 10:25:03.563319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.066 [2024-11-06 10:25:03.563326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.574233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.574253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.574259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.585084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.585104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.585110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.597083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.597102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.597108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.608482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.608509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.618485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.618503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 
10:25:03.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.629924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.629943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.629950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.639312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.639331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.639338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.650809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.650828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.650834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.656710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.656730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.656740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.667679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.667699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.667705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.678890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.678909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.678916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.689152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.689172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.689179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.699425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.699444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.699451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.710701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.710720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.710727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.722384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.722404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.722411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.732779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.732798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.732805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.743554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.743573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.743580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.755624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.755647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.755653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.766949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.766968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.766975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.776900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.776919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.776925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.786892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.327 [2024-11-06 10:25:03.786911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.327 [2024-11-06 10:25:03.786917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.327 [2024-11-06 10:25:03.796495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.328 [2024-11-06 10:25:03.796514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.328 [2024-11-06 10:25:03.796521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.328 [2024-11-06 10:25:03.803842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.328 [2024-11-06 10:25:03.803867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.328 [2024-11-06 10:25:03.803874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.328 [2024-11-06 10:25:03.812557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.328 [2024-11-06 10:25:03.812576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.328 [2024-11-06 10:25:03.812583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.328 [2024-11-06 10:25:03.821990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.328 [2024-11-06 10:25:03.822008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.328 [2024-11-06 10:25:03.822014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.588 [2024-11-06 10:25:03.828023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.588 [2024-11-06 10:25:03.828042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.588 [2024-11-06 10:25:03.828048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.838964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.838983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.838989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.846293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.846311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.846318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.858354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.858372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.858379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.869838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.869856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.869868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.878554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.878573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.878580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.887122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.887140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.887147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.896654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 
[2024-11-06 10:25:03.896672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.905192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.905210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.905218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.913313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.913335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.913341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.924900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.924918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.924925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.933750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.933768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.933775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.945733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.945752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.945758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.955168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.955186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.955193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.964143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.964161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.964167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.976591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.976609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.976615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.987552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.987570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.987576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:03.998343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:03.998362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:03.998368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.008154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.008172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.008178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.018976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.019001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.030368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.030387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.030394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.038855] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.038878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.038884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.045367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.045385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.045391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.054173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.054191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.054197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.063629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.063647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.063653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.072152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.072170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.072176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.079217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.079235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-11-06 10:25:04.079244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.589 [2024-11-06 10:25:04.086634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.589 [2024-11-06 10:25:04.086652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.590 [2024-11-06 10:25:04.086659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:00.850 [2024-11-06 10:25:04.094146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.850 [2024-11-06 10:25:04.094164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.850 [2024-11-06 10:25:04.094171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.850 [2024-11-06 10:25:04.101686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.850 [2024-11-06 10:25:04.101704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.850 [2024-11-06 10:25:04.101711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.850 [2024-11-06 10:25:04.108699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fa50) 00:33:00.850 [2024-11-06 10:25:04.108717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.850 [2024-11-06 10:25:04.108724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.850 3038.00 IOPS, 379.75 MiB/s 00:33:00.850 Latency(us) 00:33:00.850 [2024-11-06T09:25:04.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:00.850 nvme0n1 : 2.00 3040.79 380.10 0.00 0.00 5259.71 1071.79 12724.91 00:33:00.850 [2024-11-06T09:25:04.351Z] =================================================================================================================== 00:33:00.850 [2024-11-06T09:25:04.351Z] Total : 3040.79 380.10 0.00 0.00 5259.71 1071.79 12724.91 00:33:00.850 { 00:33:00.850 "results": [ 00:33:00.850 { 00:33:00.850 "job": "nvme0n1", 00:33:00.850 "core_mask": "0x2", 00:33:00.850 "workload": "randread", 00:33:00.850 "status": "finished", 00:33:00.850 "queue_depth": 16, 00:33:00.850 "io_size": 131072, 00:33:00.850 "runtime": 2.00343, 00:33:00.850 "iops": 3040.7850536330193, 00:33:00.850 "mibps": 380.0981317041274, 00:33:00.850 "io_failed": 0, 00:33:00.850 "io_timeout": 0, 00:33:00.850 "avg_latency_us": 5259.713582840884, 00:33:00.850 "min_latency_us": 1071.7866666666666, 00:33:00.850 "max_latency_us": 12724.906666666666 00:33:00.850 } 00:33:00.850 ], 00:33:00.850 "core_count": 1 00:33:00.850 } 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:00.850 | .driver_specific 00:33:00.850 | .nvme_error 00:33:00.850 | .status_code 00:33:00.850 | .command_transient_transport_error' 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:00.850 10:25:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4083103 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4083103 ']' 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4083103 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:00.850 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4083103 00:33:01.110 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:01.110 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:01.110 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4083103' 00:33:01.110 killing process with pid 4083103 00:33:01.110 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4083103 00:33:01.110 Received shutdown signal, test time was about 2.000000 seconds 00:33:01.110 00:33:01.110 Latency(us) 00:33:01.110 [2024-11-06T09:25:04.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.110 [2024-11-06T09:25:04.611Z] =================================================================================================================== 00:33:01.110 [2024-11-06T09:25:04.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.110 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4083103 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4083765 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4083765 /var/tmp/bperf.sock 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4083765 ']' 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:01.111 10:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.111 [2024-11-06 10:25:04.541678] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:01.111 [2024-11-06 10:25:04.541733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083765 ] 00:33:01.371 [2024-11-06 10:25:04.632408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.371 [2024-11-06 10:25:04.661051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.940 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:01.940 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:01.940 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.940 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.200 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.461 nvme0n1 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:02.461 10:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:02.461 Running I/O for 2 seconds... 00:33:02.461 [2024-11-06 10:25:05.845638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e8088 00:33:02.461 [2024-11-06 10:25:05.847329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.847363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.856132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e7818 00:33:02.461 [2024-11-06 10:25:05.857134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.857157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.868189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fe720 00:33:02.461 [2024-11-06 10:25:05.869171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.869189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.880201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.881178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.881204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.892203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.893173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.893191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.904207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.905185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.905203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.916166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.917166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:02.461 [2024-11-06 10:25:05.917187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.928133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.929123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.929140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.940163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.941137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.941158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.461 [2024-11-06 10:25:05.952146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.461 [2024-11-06 10:25:05.953096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.461 [2024-11-06 10:25:05.953115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.721 [2024-11-06 10:25:05.964114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:02.721 [2024-11-06 10:25:05.965102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.721 [2024-11-06 10:25:05.965119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:05.975285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e1710 00:33:02.722 [2024-11-06 10:25:05.976261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:05.976280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:05.987956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e6b70 00:33:02.722 [2024-11-06 10:25:05.988930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:05.988947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:05.999151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16845 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.000121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.011877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.012834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.023822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.024780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.024797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.035791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.036777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.036797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.047741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.048690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.048708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.059671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.060635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.071640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.072612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.072632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.083591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.084560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.097072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:02.722 [2024-11-06 10:25:06.098683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.098702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.107455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.108401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.108418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.119385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.120339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.120360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.131361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.132309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.132326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.143313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.144256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.144273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.155258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.156225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.167236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.168185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:5675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.168204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.179178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1ca0 00:33:02.722 [2024-11-06 10:25:06.180134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.180154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.190319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e27f0 00:33:02.722 [2024-11-06 10:25:06.191253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.191276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.203048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e27f0 00:33:02.722 [2024-11-06 10:25:06.203981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.203998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:02.722 [2024-11-06 10:25:06.214189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f2510 00:33:02.722 [2024-11-06 10:25:06.215104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.722 [2024-11-06 10:25:06.215121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.226899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f2510 00:33:02.984 [2024-11-06 10:25:06.227828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.227847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.238048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e3060 00:33:02.984 [2024-11-06 10:25:06.238949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.238965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.250712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f2d80 00:33:02.984 [2024-11-06 10:25:06.251585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.251602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.262682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f2d80 00:33:02.984 [2024-11-06 10:25:06.263592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.263612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.274640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166dfdc0 00:33:02.984 [2024-11-06 10:25:06.275577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.275594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.286628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e38d0 00:33:02.984 [2024-11-06 10:25:06.287539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.287556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.298631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166de8a8 00:33:02.984 [2024-11-06 10:25:06.299502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.299519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.310570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:02.984 [2024-11-06 10:25:06.311477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.311496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.322524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:02.984 [2024-11-06 10:25:06.323418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.323437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.334482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:02.984 [2024-11-06 10:25:06.335381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.335401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.346420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:02.984 [2024-11-06 10:25:06.347316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.347333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.358372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e1710 00:33:02.984 [2024-11-06 10:25:06.359290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.359307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.369552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f35f0 00:33:02.984 [2024-11-06 10:25:06.370435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.370455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.384401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166df988 00:33:02.984 [2024-11-06 10:25:06.386076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.386093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.394909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6458 00:33:02.984 [2024-11-06 10:25:06.395932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.395949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.408451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e6b70 00:33:02.984 [2024-11-06 10:25:06.410156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.410175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.418116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e01f8 00:33:02.984 [2024-11-06 
10:25:06.419127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.419144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.430867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6458 00:33:02.984 [2024-11-06 10:25:06.431919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.431936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.442814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e6fa8 00:33:02.984 [2024-11-06 10:25:06.443848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.443867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.454820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e5ec8 00:33:02.984 [2024-11-06 10:25:06.455856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.455879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.466932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:02.984 [2024-11-06 10:25:06.467931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.467948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.984 [2024-11-06 10:25:06.480620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6458 00:33:02.984 [2024-11-06 10:25:06.482312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.984 [2024-11-06 10:25:06.482332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.246 [2024-11-06 10:25:06.491045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.246 [2024-11-06 10:25:06.492050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.246 [2024-11-06 10:25:06.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.246 [2024-11-06 10:25:06.502992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 
00:33:03.246 [2024-11-06 10:25:06.504031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.246 [2024-11-06 10:25:06.504056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.246 [2024-11-06 10:25:06.514921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.246 [2024-11-06 10:25:06.515934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.246 [2024-11-06 10:25:06.515955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.246 [2024-11-06 10:25:06.526905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.246 [2024-11-06 10:25:06.527933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.246 [2024-11-06 10:25:06.527952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.246 [2024-11-06 10:25:06.538875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.539916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.539935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.550837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.551866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.551884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.562799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.563844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.563864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.574748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.575792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.575811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.586723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) 
with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.587752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.587769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.600212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.247 [2024-11-06 10:25:06.601900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.601917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.610600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6cc8 00:33:03.247 [2024-11-06 10:25:06.611634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.611653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.622600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6cc8 00:33:03.247 [2024-11-06 10:25:06.623617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.623635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.634531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6cc8 00:33:03.247 [2024-11-06 10:25:06.635559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.635578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.648007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f6cc8 00:33:03.247 [2024-11-06 10:25:06.649670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.649690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.658443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.659475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.659496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.670414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.671430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.671449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.682404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.683420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.683439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.694356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.695345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.695363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.706293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.707318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.718370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.719381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.719401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.730333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc560 00:33:03.247 [2024-11-06 10:25:06.731354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.731375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.247 [2024-11-06 10:25:06.741479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eff18 00:33:03.247 [2024-11-06 10:25:06.742477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.247 [2024-11-06 10:25:06.742496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.754169] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fbcf0 00:33:03.508 [2024-11-06 10:25:06.755129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.755149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.765318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f5be8 00:33:03.508 [2024-11-06 10:25:06.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.766313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.778079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f5be8 00:33:03.508 [2024-11-06 10:25:06.779035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.779054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.790056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f5be8 00:33:03.508 [2024-11-06 10:25:06.791047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.791067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.802036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e6b70 00:33:03.508 [2024-11-06 10:25:06.803029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.803048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.814025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e7c50 00:33:03.508 [2024-11-06 10:25:06.815011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.815035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.825221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f81e0 00:33:03.508 [2024-11-06 10:25:06.826181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.826198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:03.508 21240.00 IOPS, 82.97 MiB/s 
[2024-11-06T09:25:07.009Z] [2024-11-06 10:25:06.838871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:03.508 [2024-11-06 10:25:06.839956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.839976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.852600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ef270 00:33:03.508 [2024-11-06 10:25:06.854377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.854395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.862995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e8088 00:33:03.508 [2024-11-06 10:25:06.864114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.864131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.874908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.508 [2024-11-06 10:25:06.876009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.886911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.508 [2024-11-06 10:25:06.887986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.888003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.898857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fc998 00:33:03.508 [2024-11-06 10:25:06.899931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.899948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.910847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fda78 00:33:03.508 [2024-11-06 10:25:06.911948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.911969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.922856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e7818 00:33:03.508 [2024-11-06 10:25:06.923931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.508 [2024-11-06 10:25:06.923949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:03.508 [2024-11-06 10:25:06.934798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.508 [2024-11-06 10:25:06.935892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.935913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:06.946804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166efae0 00:33:03.509 [2024-11-06 10:25:06.947875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.947893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:06.960367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f3a28 00:33:03.509 [2024-11-06 10:25:06.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.962098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:06.970812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e6fa8 00:33:03.509 [2024-11-06 10:25:06.971919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.971938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:06.982817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e5ec8 00:33:03.509 [2024-11-06 10:25:06.983924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.983944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:06.994804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e4de8 00:33:03.509 [2024-11-06 10:25:06.995913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.509 [2024-11-06 10:25:06.995933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:03.509 [2024-11-06 10:25:07.008234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e27f0 00:33:03.770 [2024-11-06 10:25:07.009936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.009955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.018636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.019720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.019741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.030583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.031656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.031673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.042556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.043640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.043662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.054529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.055602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.055620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.066481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.067554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.067575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.078466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.079545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.079563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.090420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f46d0 00:33:03.770 [2024-11-06 10:25:07.091460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.091480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.103936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e01f8 00:33:03.770 [2024-11-06 10:25:07.105648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.105665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.114343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eaab8 00:33:03.770 [2024-11-06 10:25:07.115372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.115388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.126249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ef270 00:33:03.770 [2024-11-06 10:25:07.127305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.127327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.138216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ef270 00:33:03.770 [2024-11-06 10:25:07.139264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.139281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.150135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:03.770 [2024-11-06 10:25:07.151180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.151198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.162095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:03.770 [2024-11-06 10:25:07.163138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.163159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.174071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:03.770 [2024-11-06 10:25:07.175126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.175143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.187573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0bc0 00:33:03.770 [2024-11-06 10:25:07.189262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.189283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.197996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.199031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.199049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.209960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.211009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.211026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.221909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.222928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.233867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.234902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.234922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.245833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.246888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 
10:25:07.246905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.257794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:03.770 [2024-11-06 10:25:07.258832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:03.770 [2024-11-06 10:25:07.258852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:03.770 [2024-11-06 10:25:07.269765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.270777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.270797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.281714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.282750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.282768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.293655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.294683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.294699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.305623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.306663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.306682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.317560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.318573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.318590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.329506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.330540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:04.032 [2024-11-06 10:25:07.330559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.341450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.342466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.342483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.353375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166eea00 00:33:04.032 [2024-11-06 10:25:07.354414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.354434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.365267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.366310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.366327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.377220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.389160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.390184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.390203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.401112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.402142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.402161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.413054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.414044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9374 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.414062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.424985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166df118 00:33:04.032 [2024-11-06 10:25:07.425987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.426008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.436932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.032 [2024-11-06 10:25:07.437932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.437952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.448869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166df118 00:33:04.032 [2024-11-06 10:25:07.449844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.449861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.460847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0350 00:33:04.032 [2024-11-06 10:25:07.461848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.461870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.474477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f4b08 00:33:04.032 [2024-11-06 10:25:07.476145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.032 [2024-11-06 10:25:07.476164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:04.032 [2024-11-06 10:25:07.485065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ed4e8 00:33:04.032 [2024-11-06 10:25:07.486046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.033 [2024-11-06 10:25:07.486062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.033 [2024-11-06 10:25:07.497016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ed4e8 00:33:04.033 [2024-11-06 10:25:07.498014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.033 [2024-11-06 10:25:07.498031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.033 [2024-11-06 10:25:07.508968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166ed4e8 00:33:04.033 [2024-11-06 10:25:07.509929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.033 [2024-11-06 10:25:07.509946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.033 [2024-11-06 10:25:07.522445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fe2e8 00:33:04.033 [2024-11-06 10:25:07.524066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.033 [2024-11-06 10:25:07.524082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.532896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166dece0 00:33:04.294 [2024-11-06 10:25:07.533890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.533910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.544837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f0788 00:33:04.294 [2024-11-06 10:25:07.545817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.545833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.558357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e99d8 00:33:04.294 [2024-11-06 10:25:07.560009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.560026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.568781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166edd58 00:33:04.294 [2024-11-06 10:25:07.569780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.569801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.580745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166edd58 00:33:04.294 [2024-11-06 10:25:07.581699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.581715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.592091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f2948 00:33:04.294 [2024-11-06 10:25:07.593038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.593054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.605021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fef90 00:33:04.294 [2024-11-06 10:25:07.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.606157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.616711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e8088 00:33:04.294 [2024-11-06 10:25:07.617846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.617867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.629800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e8088 00:33:04.294 [2024-11-06 10:25:07.631246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.631266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.642951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f7538 00:33:04.294 [2024-11-06 10:25:07.644715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.644732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.652554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1430 00:33:04.294 [2024-11-06 10:25:07.653694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.653715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.665625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f1430 00:33:04.294 [2024-11-06 10:25:07.667053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.667072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.678802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fcdd0 00:33:04.294 [2024-11-06 10:25:07.680599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.680618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.688433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f7100 00:33:04.294 [2024-11-06 10:25:07.689573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.689593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.701503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166f7100 00:33:04.294 [2024-11-06 10:25:07.702916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.702933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.714657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e88f8 00:33:04.294 [2024-11-06 10:25:07.716408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.716425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.724282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fd208 00:33:04.294 [2024-11-06 10:25:07.725422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.725440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.294 [2024-11-06 10:25:07.737333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fd208 00:33:04.294 [2024-11-06 10:25:07.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.294 [2024-11-06 10:25:07.738791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:04.295 [2024-11-06 10:25:07.750456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166fdeb0 00:33:04.295 [2024-11-06 
10:25:07.752213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.295 [2024-11-06 10:25:07.752233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.295 [2024-11-06 10:25:07.760750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.295 [2024-11-06 10:25:07.762140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.295 [2024-11-06 10:25:07.762159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.295 [2024-11-06 10:25:07.773452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.295 [2024-11-06 10:25:07.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.295 [2024-11-06 10:25:07.774907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.295 [2024-11-06 10:25:07.785396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.295 [2024-11-06 10:25:07.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.295 [2024-11-06 10:25:07.786849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.556 [2024-11-06 10:25:07.797375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.556 [2024-11-06 10:25:07.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.556 [2024-11-06 10:25:07.798830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.556 [2024-11-06 10:25:07.809296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.557 [2024-11-06 10:25:07.810733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.557 [2024-11-06 10:25:07.810752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.557 [2024-11-06 10:25:07.821264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 00:33:04.557 [2024-11-06 10:25:07.822691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.557 [2024-11-06 10:25:07.822708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:04.557 [2024-11-06 10:25:07.833231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccd9d0) with pdu=0x2000166e84c0 
00:33:04.557 [2024-11-06 10:25:07.834659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.557 [2024-11-06 10:25:07.834678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:04.557 21320.50 IOPS, 83.28 MiB/s
00:33:04.557 Latency(us)
00:33:04.557 [2024-11-06T09:25:08.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:04.557 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:04.557 nvme0n1 : 2.01 21324.67 83.30 0.00 0.00 5994.32 1952.43 14199.47
00:33:04.557 [2024-11-06T09:25:08.058Z] ===================================================================================================================
00:33:04.557 [2024-11-06T09:25:08.058Z] Total : 21324.67 83.30 0.00 0.00 5994.32 1952.43 14199.47
00:33:04.557 {
00:33:04.557 "results": [
00:33:04.557 {
00:33:04.557 "job": "nvme0n1",
00:33:04.557 "core_mask": "0x2",
00:33:04.557 "workload": "randwrite",
00:33:04.557 "status": "finished",
00:33:04.557 "queue_depth": 128,
00:33:04.557 "io_size": 4096,
00:33:04.557 "runtime": 2.005611,
00:33:04.557 "iops": 21324.673628136265,
00:33:04.557 "mibps": 83.29950635990728,
00:33:04.557 "io_failed": 0,
00:33:04.557 "io_timeout": 0,
00:33:04.557 "avg_latency_us": 5994.324587123072,
00:33:04.557 "min_latency_us": 1952.4266666666667,
00:33:04.557 "max_latency_us": 14199.466666666667
00:33:04.557 }
00:33:04.557 ],
00:33:04.557 "core_count": 1
00:33:04.557 }
00:33:04.557 10:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:04.557 10:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:04.557 10:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:04.557 | .driver_specific
00:33:04.557 | .nvme_error
00:33:04.557 | .status_code
00:33:04.557 | .command_transient_transport_error'
00:33:04.557 10:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4083765
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4083765 ']'
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4083765
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:04.557 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4083765
00:33:04.818 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:04.818 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:04.818 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@970 -- # echo 'killing process with pid 4083765' 00:33:04.818 killing process with pid 4083765 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4083765 00:33:04.819 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.819 00:33:04.819 Latency(us) 00:33:04.819 [2024-11-06T09:25:08.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.819 [2024-11-06T09:25:08.320Z] =================================================================================================================== 00:33:04.819 [2024-11-06T09:25:08.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4083765 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4084446 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4084446 /var/tmp/bperf.sock 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4084446 ']' 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:04.819 10:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:04.819 [2024-11-06 10:25:08.258589] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:04.819 [2024-11-06 10:25:08.258648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084446 ] 00:33:04.819 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:04.819 Zero copy mechanism will not be used. 
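At this point the first bdevperf instance (4096-byte I/O at queue depth 128) has been killed and run_bperf_err is starting a fresh one for the 131072-byte, queue-depth-16 error run. A condensed sketch of that launch, reusing only the binary path, socket, and flags visible in the trace above (a sketch of the traced command, not a substitute for the harness itself):

# same bdevperf binary and RPC socket as in the trace; -z keeps the process idle until a perform_tests RPC arrives
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!   # recorded so the harness can kill and wait on the process after the run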
00:33:05.080 [2024-11-06 10:25:08.355249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.080 [2024-11-06 10:25:08.384541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.651 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:05.651 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:05.651 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.651 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.911 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.172 nvme0n1 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:06.172 10:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:06.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:06.432 Zero copy mechanism will not be used. 00:33:06.432 Running I/O for 2 seconds... 
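The trace above is the per-run setup for the error case: NVMe error statistics and an unlimited bdev retry count are set on the bdevperf side, crc32c error injection is disabled while the controller attaches with the data digest flag, injection is then re-armed in corrupt mode, and perform_tests kicks off the two-second workload whose digest failures are logged below. A condensed sketch of that RPC sequence, using only commands that appear in the trace (the accel_error_inject_error calls go through rpc_cmd, whose socket is not shown in this excerpt, so the plain rpc.py invocation for them is an assumption):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
# bdevperf side: count NVMe error statuses and retry failed I/O without limit (-1 retry count)
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# keep injection off while connecting (socket for the injection calls assumed, see note above)
"$RPC" accel_error_inject_error -o crc32c -t disable
# attach the subsystem over TCP with data digest enabled; digest mismatches surface as the
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen throughout this log
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# re-arm crc32c corruption with the same -t corrupt -i 32 arguments as in the trace
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
# start the timed workload in the waiting bdevperf process
"$BDEVPERF_PY" -s /var/tmp/bperf.sock perform_tests
# afterwards the pass/fail check reads back the transient-error counter, exactly as the earlier run did
"$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'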
00:33:06.432 [2024-11-06 10:25:09.691128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.691480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.691509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.698242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.698458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.698478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.706437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.706778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.706798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.712850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.713076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.713093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.717889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.718096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.726575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.726926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.726945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.735839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.736181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.736199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.743267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.743472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.743488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.751873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.752252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.752270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.761778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.762038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.762056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.772832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.773199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.773217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.784326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.784633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.784651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.796183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.807907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.808271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.808288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.819270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.819604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.819621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.830919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.831105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.831122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.842519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.842860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.842882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.854238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.854611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.854632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.865708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.865972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.865988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.875563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.875870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.875889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.886564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.886930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.886948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.897916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.898238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.898255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.909253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.909585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.909603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.919994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.920337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.920355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.432 [2024-11-06 10:25:09.931103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.432 [2024-11-06 10:25:09.931434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.432 [2024-11-06 10:25:09.931452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.939349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.939560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:09.939577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.950776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.951087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:09.951106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.962159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.962370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 
[2024-11-06 10:25:09.962387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.973701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.974065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:09.974084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.984935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.985288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:09.985306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:09.996738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:09.997079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:09.997097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.009058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.009412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.020129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.020400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.020417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.032145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.032535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.032553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.043833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.044158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.044175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.054520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.054871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.054890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.066319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.066618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.066636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.078286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.078635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.078654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.089767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.090027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.101655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.101892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.101911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.112117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.112336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.112353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.122830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.123303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.123321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.133795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.134041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.134059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.144746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.145080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.145105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.156195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.156629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.156647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.167060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.167273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.694 [2024-11-06 10:25:10.167290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.694 [2024-11-06 10:25:10.178203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.694 [2024-11-06 10:25:10.178439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.695 [2024-11-06 10:25:10.178456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.695 [2024-11-06 10:25:10.188739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.695 [2024-11-06 10:25:10.189115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.695 [2024-11-06 10:25:10.189133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.200200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.200463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.200481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.211007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.211234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.211251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.221990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.222250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.232719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.232999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.233016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.243450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.243791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.243809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.253883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.254124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.254142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.264562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.264794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.264811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.275425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 
[2024-11-06 10:25:10.275627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.275645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.286098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.286327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.286344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.297106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.297313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.297330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.307314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.307640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.307658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.317455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.317727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.317746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.327171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.327525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.327547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.334332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.334615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.334633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.343210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.343433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.343450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.350837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.351172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.351190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.956 [2024-11-06 10:25:10.359441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.956 [2024-11-06 10:25:10.359709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.956 [2024-11-06 10:25:10.359727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.366409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.366648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.366665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.373733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.373959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.373976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.381586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.381798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.381815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.390459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.390549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.390565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.397044] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.397428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.403564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.403884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.403902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.412989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.413203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.413220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.421395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.421730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.428836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.429127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.429144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.437323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.437691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.437709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.957 [2024-11-06 10:25:10.446259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.446598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.446615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:06.957 [2024-11-06 10:25:10.455097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:06.957 [2024-11-06 10:25:10.455423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.957 [2024-11-06 10:25:10.455441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.463421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.463632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.463650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.473155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.473451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.473469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.481757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.481986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.482003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.490831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.491151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.491169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.500805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.501156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.501176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.510982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.511244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.511262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.522023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.522255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.522272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.533465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.533907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.533925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.543854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.544206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.544224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.554645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.554936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.554960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.565290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.565503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.565520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.575883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.576200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.576218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.584671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.584873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.584890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.593379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.593700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.218 [2024-11-06 10:25:10.593718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.218 [2024-11-06 10:25:10.602148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.218 [2024-11-06 10:25:10.602352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.602369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.610897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.611202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.611221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.619805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.619997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.620014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.628902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.629226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.629244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.637703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.637926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.646134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.646372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.646388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.653732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.654050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.654067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.662202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.662538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.662556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.671462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.671761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.671779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.219 3124.00 IOPS, 390.50 MiB/s [2024-11-06T09:25:10.720Z] [2024-11-06 10:25:10.681330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.681559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.681576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.690344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.690588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.690606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.698450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.698535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.698551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.706725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.706975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.706993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.219 [2024-11-06 10:25:10.714615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.219 [2024-11-06 10:25:10.714725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.219 [2024-11-06 10:25:10.714743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.480 [2024-11-06 10:25:10.722757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.722997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.723014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.729678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.729947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.729965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.736159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.736233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.736250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.743913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.744131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.744150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.751122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.751366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.751384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.757163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.757414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.757430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.765417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.765603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.765620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.774567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.774760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.774776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.781124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.781199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.781215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.787043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.787143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.787160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.796609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.796890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.796907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.804795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.804996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.805015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.812672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.812876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.812894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.821874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.822126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.822143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.830712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.831011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.831031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.836941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.837205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.837221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.845846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.846148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.846166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.852515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.852598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.852614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.860087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.860282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.860299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.869165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 
00:33:07.481 [2024-11-06 10:25:10.869429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.869445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.875279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.875543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.875561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.882939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.883220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.883237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.891120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.891209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.891225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.897461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.897550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.897565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.904573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.904670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.904705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.910206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.910324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.481 [2024-11-06 10:25:10.916830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.481 [2024-11-06 10:25:10.917004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.481 [2024-11-06 10:25:10.917020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.923843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.924121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.924138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.931898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.932155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.932172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.938686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.938849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.938869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.946913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.947020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.947036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.954485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.954590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.954606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.962936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.963131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.963148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.969723] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.969825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.969842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.976488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.976565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.976580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.482 [2024-11-06 10:25:10.979983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.482 [2024-11-06 10:25:10.980068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.482 [2024-11-06 10:25:10.980083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.743 [2024-11-06 10:25:10.983534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.743 [2024-11-06 10:25:10.983610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.743 [2024-11-06 10:25:10.983626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.743 [2024-11-06 10:25:10.988886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.743 [2024-11-06 10:25:10.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.743 [2024-11-06 10:25:10.988983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.743 [2024-11-06 10:25:10.992346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.743 [2024-11-06 10:25:10.992423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.743 [2024-11-06 10:25:10.992443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.743 [2024-11-06 10:25:10.997466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.743 [2024-11-06 10:25:10.997542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.743 [2024-11-06 10:25:10.997566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
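For a quick cross-check of how many digest errors this slice of the run produced, the two recurring strings above can simply be counted in a saved copy of the console output. This is only a reading aid; the authoritative count is the one the test later pulls over RPC (see the sketch after the next burst of records). The log path below is a hypothetical placeholder for wherever the console text was captured.

    # Count injected digest errors in a saved copy of this console output.
    # LOGFILE is an assumed path; adjust to wherever the log was saved.
    LOGFILE=${1:-nvmf_digest_error_console.log}
    grep -cF 'data_crc32_calc_done: *ERROR*: Data digest error' "$LOGFILE"
    grep -cF 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOGFILE"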
00:33:07.743 [2024-11-06 10:25:11.002219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.002293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.002309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.006975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.007113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.007129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.012956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.013221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.013238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.020579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.020689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.020706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.025367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.025447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.025463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.034797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.034896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.034913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.044587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.044807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.044823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.053812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.054046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.054062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.064329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.064610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.064628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.073790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.074041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.074058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.083849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.084135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.084157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.093584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.093768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.093787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.103463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.103686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.103702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.113453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.113727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.113745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.124076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.124356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.124373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.133888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.133989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.134005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.139016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.139114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.139130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.143402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.143490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.143505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.151521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.151615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.151630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.157848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.158080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.158095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.168763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.168988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.169004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.178919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.179173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.179190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.188995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.189287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.189304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.198199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.198495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.198512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.208727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.209065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.209083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.218048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.218396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.218413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.228200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.744 [2024-11-06 10:25:11.228400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.744 [2024-11-06 10:25:11.228416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.744 [2024-11-06 10:25:11.238922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:07.745 [2024-11-06 10:25:11.239168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.745 
[2024-11-06 10:25:11.239185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.248834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.249120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.249137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.259222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.259504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.259521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.269388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.269530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.269546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.279714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.279911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.279927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.289361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.289609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.289625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.299164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.299432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.299450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.309006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.309199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.309214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.319267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.319421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.319437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.328683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.328782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.328805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.334604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.334681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.334697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.339572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.339856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.339879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.344365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.344446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.344461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.347823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.347901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.347917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.351568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.351641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.351662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.356317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.356396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.006 [2024-11-06 10:25:11.356412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.006 [2024-11-06 10:25:11.359836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.006 [2024-11-06 10:25:11.359942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.359958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.365079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.365277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.365293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.373899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.374002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.374018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.381087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.381308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.381325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.390406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.390572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.390588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.398254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.398529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.398547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.405577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.405652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.405668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.411072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.411147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.411163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.417210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.417503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.417520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.425614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.425914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.425931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.432690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.432763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.432784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.438984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.439060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.439076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.445192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 
[2024-11-06 10:25:11.445412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.445428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.454868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.455138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.455154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.463967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.464199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.464215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.471644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.471718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.471735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.480580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.480785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.480801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.486596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.486671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.486687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.495077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.495320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.495336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.007 [2024-11-06 10:25:11.503793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.007 [2024-11-06 10:25:11.504064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.007 [2024-11-06 10:25:11.504081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.512859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.512956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.512972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.521994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.522078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.522094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.529876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.530013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.530028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.538111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.538395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.538412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.546462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.546662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.546678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.554840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.554952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.554968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.562247] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.562411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.562427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.572046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.572141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.572158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.578198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.578434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.578452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.585295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.585380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.585396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.590799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.591051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.591069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.599770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.599876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.599892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.608395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.608597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.608613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
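The pass/fail decision that follows this burst does not parse these messages at all: host/digest.sh asks the bdevperf RPC server for the bdev's I/O statistics and reads the NVMe transient-transport-error counter out of the returned JSON with the jq filter shown in the trace below, then requires it to be non-zero ("(( 236 > 0 ))" for this run). The sketch below condenses those traced commands into one runnable form; the rpc.py path and the bperf.sock socket are taken from this run's workspace and would differ elsewhere.

    # Condensed form of the check traced below (paths/socket from this run; adjust as needed).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest-error test passes only if at least one transient transport error was recorded.
    (( errcount > 0 )) && echo "OK: $errcount transient transport errors" \
                       || echo "FAIL: no transient transport errors seen"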
00:33:08.269 [2024-11-06 10:25:11.614205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.614287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.614302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.618651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.618727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.618743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.622100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.622176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.622192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.625568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.625655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.625675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.629033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.629113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.629128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.634297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.269 [2024-11-06 10:25:11.634549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.269 [2024-11-06 10:25:11.634567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.269 [2024-11-06 10:25:11.641287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.641376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.641392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.647293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.647528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.647544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.652844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.653035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.653051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.659326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.659400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.659416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.662982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.663057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.663074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.667366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.667448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.667464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.673394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.673510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.270 [2024-11-06 10:25:11.678869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.678950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.678965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.270 3652.50 IOPS, 456.56 MiB/s [2024-11-06T09:25:11.771Z] [2024-11-06 10:25:11.684777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ccdd10) with pdu=0x2000166fef90 00:33:08.270 [2024-11-06 10:25:11.685072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.270 [2024-11-06 10:25:11.685089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.270 00:33:08.270 Latency(us) 00:33:08.270 [2024-11-06T09:25:11.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.270 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:08.270 nvme0n1 : 2.01 3651.31 456.41 0.00 0.00 4374.72 1631.57 12506.45 00:33:08.270 [2024-11-06T09:25:11.771Z] =================================================================================================================== 00:33:08.270 [2024-11-06T09:25:11.771Z] Total : 3651.31 456.41 0.00 0.00 4374.72 1631.57 12506.45 00:33:08.270 { 00:33:08.270 "results": [ 00:33:08.270 { 00:33:08.270 "job": "nvme0n1", 00:33:08.270 "core_mask": "0x2", 00:33:08.270 "workload": "randwrite", 00:33:08.270 "status": "finished", 00:33:08.270 "queue_depth": 16, 00:33:08.270 "io_size": 131072, 00:33:08.270 "runtime": 2.005853, 00:33:08.270 "iops": 3651.3144283255056, 00:33:08.270 "mibps": 456.4143035406882, 00:33:08.270 "io_failed": 0, 00:33:08.270 "io_timeout": 0, 00:33:08.270 "avg_latency_us": 4374.719359184416, 00:33:08.270 "min_latency_us": 1631.5733333333333, 00:33:08.270 "max_latency_us": 12506.453333333333 00:33:08.270 } 00:33:08.270 ], 00:33:08.270 "core_count": 1 00:33:08.270 } 00:33:08.270 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:08.270 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:08.270 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:08.270 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:08.270 | .driver_specific 00:33:08.270 | .nvme_error 00:33:08.270 | .status_code 00:33:08.270 | .command_transient_transport_error' 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4084446 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4084446 ']' 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4084446 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4084446 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4084446' 00:33:08.531 killing process with pid 4084446 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4084446 00:33:08.531 Received shutdown signal, test time was about 2.000000 seconds 00:33:08.531 00:33:08.531 Latency(us) 00:33:08.531 [2024-11-06T09:25:12.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.531 [2024-11-06T09:25:12.032Z] =================================================================================================================== 00:33:08.531 [2024-11-06T09:25:12.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.531 10:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4084446 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4082221 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4082221 ']' 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4082221 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4082221 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4082221' 00:33:08.791 killing process with pid 4082221 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4082221 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4082221 00:33:08.791 00:33:08.791 real 0m15.837s 00:33:08.791 user 0m31.303s 00:33:08.791 sys 0m3.465s 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 ************************************ 00:33:08.791 END TEST nvmf_digest_error 00:33:08.791 ************************************ 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.791 10:25:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.791 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.791 rmmod nvme_tcp 00:33:09.055 rmmod nvme_fabrics 00:33:09.055 rmmod nvme_keyring 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4082221 ']' 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4082221 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 4082221 ']' 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 4082221 00:33:09.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4082221) - No such process 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 4082221 is not found' 00:33:09.055 Process with pid 4082221 is not found 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.055 10:25:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.965 10:25:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.965 00:33:10.965 real 0m43.468s 00:33:10.965 user 1m6.810s 00:33:10.965 sys 0m13.351s 00:33:10.965 10:25:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:10.965 10:25:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.965 ************************************ 00:33:10.965 END TEST nvmf_digest 00:33:10.965 ************************************ 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.225 ************************************ 00:33:11.225 START TEST nvmf_bdevperf 00:33:11.225 ************************************ 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:11.225 * Looking for test storage... 00:33:11.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:11.225 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:11.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.226 --rc genhtml_branch_coverage=1 00:33:11.226 --rc genhtml_function_coverage=1 00:33:11.226 --rc genhtml_legend=1 00:33:11.226 --rc geninfo_all_blocks=1 00:33:11.226 --rc geninfo_unexecuted_blocks=1 00:33:11.226 00:33:11.226 ' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:11.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.226 --rc genhtml_branch_coverage=1 00:33:11.226 --rc genhtml_function_coverage=1 00:33:11.226 --rc genhtml_legend=1 00:33:11.226 --rc geninfo_all_blocks=1 00:33:11.226 --rc geninfo_unexecuted_blocks=1 00:33:11.226 00:33:11.226 ' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:11.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.226 --rc genhtml_branch_coverage=1 00:33:11.226 --rc genhtml_function_coverage=1 00:33:11.226 --rc genhtml_legend=1 00:33:11.226 --rc geninfo_all_blocks=1 00:33:11.226 --rc geninfo_unexecuted_blocks=1 00:33:11.226 00:33:11.226 ' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:11.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.226 --rc genhtml_branch_coverage=1 00:33:11.226 --rc genhtml_function_coverage=1 00:33:11.226 --rc genhtml_legend=1 00:33:11.226 --rc geninfo_all_blocks=1 00:33:11.226 --rc geninfo_unexecuted_blocks=1 00:33:11.226 00:33:11.226 ' 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.226 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.487 10:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:19.628 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:19.628 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.628 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:19.629 Found net devices under 0000:31:00.0: cvl_0_0 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:19.629 Found net devices under 0000:31:00.1: cvl_0_1 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.629 10:25:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.629 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.629 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.629 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.629 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.890 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:19.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:33:19.891 00:33:19.891 --- 10.0.0.2 ping statistics --- 00:33:19.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.891 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:33:19.891 00:33:19.891 --- 10.0.0.1 ping statistics --- 00:33:19.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.891 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4090063 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4090063 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4090063 ']' 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:19.891 10:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:19.891 [2024-11-06 10:25:23.351161] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:33:19.891 [2024-11-06 10:25:23.351228] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.151 [2024-11-06 10:25:23.459412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:20.151 [2024-11-06 10:25:23.511475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.151 [2024-11-06 10:25:23.511528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.151 [2024-11-06 10:25:23.511537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.151 [2024-11-06 10:25:23.511544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.151 [2024-11-06 10:25:23.511550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.151 [2024-11-06 10:25:23.513404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.151 [2024-11-06 10:25:23.513568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.151 [2024-11-06 10:25:23.513568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.722 [2024-11-06 10:25:24.208849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.722 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.983 Malloc0 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.983 [2024-11-06 10:25:24.276006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.983 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.983 { 00:33:20.983 "params": { 00:33:20.983 "name": "Nvme$subsystem", 00:33:20.983 "trtype": "$TEST_TRANSPORT", 00:33:20.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.983 "adrfam": "ipv4", 00:33:20.983 "trsvcid": "$NVMF_PORT", 00:33:20.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.984 "hdgst": ${hdgst:-false}, 00:33:20.984 "ddgst": ${ddgst:-false} 00:33:20.984 }, 00:33:20.984 "method": "bdev_nvme_attach_controller" 00:33:20.984 } 00:33:20.984 EOF 00:33:20.984 )") 00:33:20.984 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:20.984 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:20.984 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:20.984 10:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.984 "params": { 00:33:20.984 "name": "Nvme1", 00:33:20.984 "trtype": "tcp", 00:33:20.984 "traddr": "10.0.0.2", 00:33:20.984 "adrfam": "ipv4", 00:33:20.984 "trsvcid": "4420", 00:33:20.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:20.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:20.984 "hdgst": false, 00:33:20.984 "ddgst": false 00:33:20.984 }, 00:33:20.984 "method": "bdev_nvme_attach_controller" 00:33:20.984 }' 00:33:20.984 [2024-11-06 10:25:24.339534] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:33:20.984 [2024-11-06 10:25:24.339584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090176 ] 00:33:20.984 [2024-11-06 10:25:24.416838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.984 [2024-11-06 10:25:24.452841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.244 Running I/O for 1 seconds... 00:33:22.185 8821.00 IOPS, 34.46 MiB/s 00:33:22.185 Latency(us) 00:33:22.185 [2024-11-06T09:25:25.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.185 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:22.185 Verification LBA range: start 0x0 length 0x4000 00:33:22.185 Nvme1n1 : 1.01 8903.25 34.78 0.00 0.00 14288.19 2962.77 14199.47 00:33:22.185 [2024-11-06T09:25:25.686Z] =================================================================================================================== 00:33:22.185 [2024-11-06T09:25:25.686Z] Total : 8903.25 34.78 0.00 0.00 14288.19 2962.77 14199.47 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4090510 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:22.445 { 00:33:22.445 "params": { 00:33:22.445 "name": "Nvme$subsystem", 00:33:22.445 "trtype": "$TEST_TRANSPORT", 00:33:22.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:22.445 "adrfam": "ipv4", 00:33:22.445 "trsvcid": "$NVMF_PORT", 00:33:22.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:22.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:22.445 "hdgst": ${hdgst:-false}, 00:33:22.445 "ddgst": ${ddgst:-false} 00:33:22.445 }, 00:33:22.445 "method": "bdev_nvme_attach_controller" 00:33:22.445 } 00:33:22.445 EOF 00:33:22.445 )") 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:22.445 10:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:22.445 "params": { 00:33:22.445 "name": "Nvme1", 00:33:22.445 "trtype": "tcp", 00:33:22.445 "traddr": "10.0.0.2", 00:33:22.445 "adrfam": "ipv4", 00:33:22.445 "trsvcid": "4420", 00:33:22.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:22.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:22.445 "hdgst": false, 00:33:22.445 "ddgst": false 00:33:22.445 }, 00:33:22.445 "method": "bdev_nvme_attach_controller" 00:33:22.445 }' 00:33:22.445 [2024-11-06 10:25:25.775746] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:22.446 [2024-11-06 10:25:25.775801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090510 ] 00:33:22.446 [2024-11-06 10:25:25.852923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.446 [2024-11-06 10:25:25.888375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.705 Running I/O for 15 seconds... 00:33:25.028 10931.00 IOPS, 42.70 MiB/s [2024-11-06T09:25:28.793Z] 11491.50 IOPS, 44.89 MiB/s [2024-11-06T09:25:28.793Z] 10:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4090063 00:33:25.292 10:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:25.292 [2024-11-06 10:25:28.740665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 
10:25:28.740820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.740980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.740994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.292 [2024-11-06 10:25:28.741440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.292 [2024-11-06 10:25:28.741447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.293 [2024-11-06 10:25:28.741811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 
[2024-11-06 10:25:28.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.741986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.741996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.293 [2024-11-06 10:25:28.742133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.293 [2024-11-06 10:25:28.742140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111088 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.294 [2024-11-06 10:25:28.742550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.294 [2024-11-06 10:25:28.742818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.294 [2024-11-06 10:25:28.742825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.742990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.295 [2024-11-06 10:25:28.742997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.743006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.295 [2024-11-06 10:25:28.743013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.743023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.295 [2024-11-06 10:25:28.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.743041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.295 [2024-11-06 10:25:28.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.743057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.295 [2024-11-06 10:25:28.743064] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.743073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1906b60 is same with the state(6) to be set 00:33:25.295 [2024-11-06 10:25:28.743083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:25.295 [2024-11-06 10:25:28.743089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:25.295 [2024-11-06 10:25:28.743095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110824 len:8 PRP1 0x0 PRP2 0x0 00:33:25.295 [2024-11-06 10:25:28.743103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.295 [2024-11-06 10:25:28.746675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.295 [2024-11-06 10:25:28.746728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.295 [2024-11-06 10:25:28.747527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.295 [2024-11-06 10:25:28.747544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.295 [2024-11-06 10:25:28.747553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.295 [2024-11-06 10:25:28.747770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.295 [2024-11-06 10:25:28.747993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.295 [2024-11-06 10:25:28.748003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.295 [2024-11-06 10:25:28.748012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.295 [2024-11-06 10:25:28.748020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
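The block above is the NVMe driver printing every queued I/O that was aborted when the submission queue was deleted (ABORTED - SQ DELETION), followed by the first failed reconnect attempt. A minimal sketch for summarising such a dump offline, assuming the log keeps the nvme_io_qpair_print_command format shown here (the path console.log is only a placeholder):

#!/usr/bin/env python3
# Summarize aborted I/O entries printed by nvme_io_qpair_print_command.
import re
import sys
from collections import Counter

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                 r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

counts = Counter()
lbas = []
with open(sys.argv[1] if len(sys.argv) > 1 else "console.log") as f:
    for line in f:
        for op, sqid, cid, nsid, lba, length in CMD.findall(line):
            counts[op] += 1
            lbas.append(int(lba))

print("aborted commands:", dict(counts))
if lbas:
    print("LBA range: %d .. %d" % (min(lbas), max(lbas)))

On this excerpt it would report a mix of READ and WRITE commands in the 110568-111376 LBA neighbourhood, all completed with the same ABORTED - SQ DELETION status.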
00:33:25.295 [2024-11-06 10:25:28.760800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.295 [2024-11-06 10:25:28.761473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.295 [2024-11-06 10:25:28.761510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.295 [2024-11-06 10:25:28.761521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.295 [2024-11-06 10:25:28.761760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.295 [2024-11-06 10:25:28.761990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.295 [2024-11-06 10:25:28.762000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.295 [2024-11-06 10:25:28.762009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.295 [2024-11-06 10:25:28.762023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.295 [2024-11-06 10:25:28.774595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.295 [2024-11-06 10:25:28.775127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.295 [2024-11-06 10:25:28.775165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.295 [2024-11-06 10:25:28.775177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.295 [2024-11-06 10:25:28.775413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.295 [2024-11-06 10:25:28.775634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:28.989558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:28.989591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:28.989603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.556 [2024-11-06 10:25:28.993571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.556 [2024-11-06 10:25:28.994287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.556 [2024-11-06 10:25:28.994329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.556 [2024-11-06 10:25:28.994341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.556 [2024-11-06 10:25:28.994578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.556 [2024-11-06 10:25:28.994800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:28.994809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:28.994818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:28.994827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.556 [2024-11-06 10:25:29.007411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.556 [2024-11-06 10:25:29.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.556 [2024-11-06 10:25:29.008008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.556 [2024-11-06 10:25:29.008020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.556 [2024-11-06 10:25:29.008257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.556 [2024-11-06 10:25:29.008480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:29.008490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:29.008498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:29.008507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.556 [2024-11-06 10:25:29.021294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.556 [2024-11-06 10:25:29.021877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.556 [2024-11-06 10:25:29.021898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.556 [2024-11-06 10:25:29.021906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.556 [2024-11-06 10:25:29.022124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.556 [2024-11-06 10:25:29.022341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:29.022350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:29.022357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:29.022364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.556 [2024-11-06 10:25:29.035145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.556 [2024-11-06 10:25:29.035837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.556 [2024-11-06 10:25:29.035884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.556 [2024-11-06 10:25:29.035897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.556 [2024-11-06 10:25:29.036132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.556 [2024-11-06 10:25:29.036353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:29.036363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:29.036371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:29.036379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.556 [2024-11-06 10:25:29.048956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.556 [2024-11-06 10:25:29.049500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.556 [2024-11-06 10:25:29.049520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.556 [2024-11-06 10:25:29.049528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.556 [2024-11-06 10:25:29.049745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.556 [2024-11-06 10:25:29.049968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.556 [2024-11-06 10:25:29.049978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.556 [2024-11-06 10:25:29.049986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.556 [2024-11-06 10:25:29.049993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.818 [2024-11-06 10:25:29.062769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.063321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.063359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.063371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.063612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.063833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.063843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.063851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.063859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
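connect() failed, errno = 111 is ECONNREFUSED on Linux: each reconnect attempt reaches 10.0.0.2 but nothing is accepting on port 4420 at that moment, so every cycle ends in "controller reinitialization failed". A quick check of the errno mapping (Linux numbering assumed):

import errno
print(errno.errorcode[111])       # -> 'ECONNREFUSED' on Linux
print(errno.ECONNREFUSED == 111)  # -> True on Linux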
00:33:25.818 [2024-11-06 10:25:29.076637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.077315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.077354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.077365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.077600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.077821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.077831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.077839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.077847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.818 [2024-11-06 10:25:29.090456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.091146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.091185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.091196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.091432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.091652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.091663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.091671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.091680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.818 [2024-11-06 10:25:29.104256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.104791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.104812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.104820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.105044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.105262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.105276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.105284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.105291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.818 [2024-11-06 10:25:29.118085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.118700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.118739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.118751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.118995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.119217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.119227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.119235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.119244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.818 [2024-11-06 10:25:29.131819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.132536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.132575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.132587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.132824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.133062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.133074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.133083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.133091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.818 10059.67 IOPS, 39.30 MiB/s [2024-11-06T09:25:29.319Z] [2024-11-06 10:25:29.145660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.146291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.146331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.146342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.146577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.146798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.818 [2024-11-06 10:25:29.146808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.818 [2024-11-06 10:25:29.146816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.818 [2024-11-06 10:25:29.146829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
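The interleaved progress line (10059.67 IOPS, 39.30 MiB/s) is consistent with the len:8 I/Os in the abort dump above if the namespace uses 512-byte logical blocks, i.e. 4 KiB per command; that block size is an assumption here, but the arithmetic lines up:

iops = 10059.67          # from the progress line above
blocks_per_io = 8        # len:8 in the command dump
block_size = 512         # assumed logical block size in bytes
mib_s = iops * blocks_per_io * block_size / (1024 * 1024)
print(round(mib_s, 2))   # -> 39.3, matching the reported 39.30 MiB/s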
00:33:25.818 [2024-11-06 10:25:29.159399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.818 [2024-11-06 10:25:29.160074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.818 [2024-11-06 10:25:29.160112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.818 [2024-11-06 10:25:29.160125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.818 [2024-11-06 10:25:29.160361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.818 [2024-11-06 10:25:29.160582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.160592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.160601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.160609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.819 [2024-11-06 10:25:29.173192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.173845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.173892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.173903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.174139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.174359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.174369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.174378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.174386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.819 [2024-11-06 10:25:29.186960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.187607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.187647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.187658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.187903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.188125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.188136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.188144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.188152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.819 [2024-11-06 10:25:29.200745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.201398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.201438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.201449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.201684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.201916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.201927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.201935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.201943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.819 [2024-11-06 10:25:29.214523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.215154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.215193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.215205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.215440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.215662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.215672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.215680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.215688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.819 [2024-11-06 10:25:29.228287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.228858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.228884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.228892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.229109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.229326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.229336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.229344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.229351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.819 [2024-11-06 10:25:29.242138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.242813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.242852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.242876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.243113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.243335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.243345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.243353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.243362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.819 [2024-11-06 10:25:29.255928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.256662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.256702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.256713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.256959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.257181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.257192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.257200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.257208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.819 [2024-11-06 10:25:29.269790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.270471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.270510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.270521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.270757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.270986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.270998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.271006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.271014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.819 [2024-11-06 10:25:29.283593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.284202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.284242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.284253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.284488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.284714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.819 [2024-11-06 10:25:29.284724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.819 [2024-11-06 10:25:29.284733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.819 [2024-11-06 10:25:29.284741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:25.819 [2024-11-06 10:25:29.297350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.819 [2024-11-06 10:25:29.297905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.819 [2024-11-06 10:25:29.297933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.819 [2024-11-06 10:25:29.297942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.819 [2024-11-06 10:25:29.298163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.819 [2024-11-06 10:25:29.298382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.820 [2024-11-06 10:25:29.298391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.820 [2024-11-06 10:25:29.298399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.820 [2024-11-06 10:25:29.298406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:25.820 [2024-11-06 10:25:29.311188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:25.820 [2024-11-06 10:25:29.311851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.820 [2024-11-06 10:25:29.311898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:25.820 [2024-11-06 10:25:29.311910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:25.820 [2024-11-06 10:25:29.312146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:25.820 [2024-11-06 10:25:29.312366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:25.820 [2024-11-06 10:25:29.312376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:25.820 [2024-11-06 10:25:29.312385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:25.820 [2024-11-06 10:25:29.312393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.081 [2024-11-06 10:25:29.324977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.081 [2024-11-06 10:25:29.325650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.081 [2024-11-06 10:25:29.325689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.081 [2024-11-06 10:25:29.325700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.081 [2024-11-06 10:25:29.325947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.081 [2024-11-06 10:25:29.326170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.081 [2024-11-06 10:25:29.326180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.081 [2024-11-06 10:25:29.326188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.081 [2024-11-06 10:25:29.326202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.081 [2024-11-06 10:25:29.338763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.081 [2024-11-06 10:25:29.339384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.081 [2024-11-06 10:25:29.339424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.081 [2024-11-06 10:25:29.339435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.081 [2024-11-06 10:25:29.339671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.081 [2024-11-06 10:25:29.339902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.081 [2024-11-06 10:25:29.339913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.081 [2024-11-06 10:25:29.339922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.081 [2024-11-06 10:25:29.339930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.081 [2024-11-06 10:25:29.352695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.081 [2024-11-06 10:25:29.353345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.081 [2024-11-06 10:25:29.353384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.081 [2024-11-06 10:25:29.353395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.081 [2024-11-06 10:25:29.353630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.081 [2024-11-06 10:25:29.353851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.081 [2024-11-06 10:25:29.353872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.081 [2024-11-06 10:25:29.353881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.081 [2024-11-06 10:25:29.353889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.081 [2024-11-06 10:25:29.366450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.081 [2024-11-06 10:25:29.367067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.081 [2024-11-06 10:25:29.367106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.081 [2024-11-06 10:25:29.367117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.081 [2024-11-06 10:25:29.367352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.081 [2024-11-06 10:25:29.367573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.081 [2024-11-06 10:25:29.367583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.081 [2024-11-06 10:25:29.367592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.081 [2024-11-06 10:25:29.367600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.081 [2024-11-06 10:25:29.380378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.081 [2024-11-06 10:25:29.381086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.081 [2024-11-06 10:25:29.381125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.381137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.381373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.381594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.381604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.381612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.381620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.082 [2024-11-06 10:25:29.394211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.394885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.394923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.394935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.395173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.395393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.395403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.395411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.395419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.082 [2024-11-06 10:25:29.407992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.408640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.408679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.408690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.408936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.409158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.409169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.409177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.409185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.082 [2024-11-06 10:25:29.421767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.422347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.422367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.422379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.422596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.422813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.422822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.422829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.422836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.082 [2024-11-06 10:25:29.435607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.436135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.436153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.436161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.436377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.436593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.436603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.436610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.436617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.082 [2024-11-06 10:25:29.449384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.449877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.449896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.449904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.450120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.450336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.450346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.450354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.450360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.082 [2024-11-06 10:25:29.463119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.463777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.463827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.464072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.464299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.464309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.464318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.464326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.082 [2024-11-06 10:25:29.476888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.477450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.477470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.477478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.477694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.477919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.477931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.477938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.477945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.082 [2024-11-06 10:25:29.490717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.491344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.491383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.491394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.491630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.491873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.491886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.491895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.491903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.082 [2024-11-06 10:25:29.504463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.505136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.505176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.082 [2024-11-06 10:25:29.505187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.082 [2024-11-06 10:25:29.505422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.082 [2024-11-06 10:25:29.505643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.082 [2024-11-06 10:25:29.505653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.082 [2024-11-06 10:25:29.505661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.082 [2024-11-06 10:25:29.505674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.082 [2024-11-06 10:25:29.518246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.082 [2024-11-06 10:25:29.518752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.082 [2024-11-06 10:25:29.518792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.083 [2024-11-06 10:25:29.518803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.083 [2024-11-06 10:25:29.519049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.083 [2024-11-06 10:25:29.519272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.083 [2024-11-06 10:25:29.519282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.083 [2024-11-06 10:25:29.519291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.083 [2024-11-06 10:25:29.519299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.083 [2024-11-06 10:25:29.532085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.083 [2024-11-06 10:25:29.532719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.083 [2024-11-06 10:25:29.532758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.083 [2024-11-06 10:25:29.532769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.083 [2024-11-06 10:25:29.533013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.083 [2024-11-06 10:25:29.533236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.083 [2024-11-06 10:25:29.533247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.083 [2024-11-06 10:25:29.533255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.083 [2024-11-06 10:25:29.533263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.083 [2024-11-06 10:25:29.545826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.083 [2024-11-06 10:25:29.546497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.083 [2024-11-06 10:25:29.546536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.083 [2024-11-06 10:25:29.546547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.083 [2024-11-06 10:25:29.546782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.083 [2024-11-06 10:25:29.547014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.083 [2024-11-06 10:25:29.547025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.083 [2024-11-06 10:25:29.547033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.083 [2024-11-06 10:25:29.547041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.083 [2024-11-06 10:25:29.559606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.083 [2024-11-06 10:25:29.560245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.083 [2024-11-06 10:25:29.560285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.083 [2024-11-06 10:25:29.560296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.083 [2024-11-06 10:25:29.560531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.083 [2024-11-06 10:25:29.560752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.083 [2024-11-06 10:25:29.560762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.083 [2024-11-06 10:25:29.560771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.083 [2024-11-06 10:25:29.560779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.083 [2024-11-06 10:25:29.573348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.083 [2024-11-06 10:25:29.574044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.083 [2024-11-06 10:25:29.574083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.083 [2024-11-06 10:25:29.574094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.083 [2024-11-06 10:25:29.574330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.083 [2024-11-06 10:25:29.574551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.083 [2024-11-06 10:25:29.574561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.083 [2024-11-06 10:25:29.574569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.083 [2024-11-06 10:25:29.574577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.357 [2024-11-06 10:25:29.587174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.358 [2024-11-06 10:25:29.587805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.358 [2024-11-06 10:25:29.587844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.358 [2024-11-06 10:25:29.587855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.358 [2024-11-06 10:25:29.588101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.358 [2024-11-06 10:25:29.588323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.358 [2024-11-06 10:25:29.588333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.358 [2024-11-06 10:25:29.588342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.358 [2024-11-06 10:25:29.588350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.358 [2024-11-06 10:25:29.600935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.358 [2024-11-06 10:25:29.601586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.358 [2024-11-06 10:25:29.601625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.358 [2024-11-06 10:25:29.601641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.358 [2024-11-06 10:25:29.601887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.358 [2024-11-06 10:25:29.602109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.358 [2024-11-06 10:25:29.602119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.358 [2024-11-06 10:25:29.602128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.358 [2024-11-06 10:25:29.602135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.358 [2024-11-06 10:25:29.614697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.358 [2024-11-06 10:25:29.615369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.358 [2024-11-06 10:25:29.615408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.358 [2024-11-06 10:25:29.615419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.358 [2024-11-06 10:25:29.615655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.358 [2024-11-06 10:25:29.615886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.358 [2024-11-06 10:25:29.615897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.358 [2024-11-06 10:25:29.615906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.358 [2024-11-06 10:25:29.615914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.358 [2024-11-06 10:25:29.628478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.358 [2024-11-06 10:25:29.629121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.358 [2024-11-06 10:25:29.629160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.358 [2024-11-06 10:25:29.629171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.358 [2024-11-06 10:25:29.629406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.358 [2024-11-06 10:25:29.629627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.358 [2024-11-06 10:25:29.629637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.358 [2024-11-06 10:25:29.629645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.358 [2024-11-06 10:25:29.629653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.358 [2024-11-06 10:25:29.642230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.358 [2024-11-06 10:25:29.642758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.358 [2024-11-06 10:25:29.642778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.358 [2024-11-06 10:25:29.642786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.359 [2024-11-06 10:25:29.643009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.359 [2024-11-06 10:25:29.643227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.359 [2024-11-06 10:25:29.643244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.359 [2024-11-06 10:25:29.643252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.359 [2024-11-06 10:25:29.643259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.359 [2024-11-06 10:25:29.656020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.359 [2024-11-06 10:25:29.656586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.359 [2024-11-06 10:25:29.656604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.359 [2024-11-06 10:25:29.656611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.359 [2024-11-06 10:25:29.656827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.359 [2024-11-06 10:25:29.657051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.359 [2024-11-06 10:25:29.657061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.359 [2024-11-06 10:25:29.657068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.359 [2024-11-06 10:25:29.657075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.359 [2024-11-06 10:25:29.669830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.359 [2024-11-06 10:25:29.670347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.359 [2024-11-06 10:25:29.670364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.359 [2024-11-06 10:25:29.670372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.359 [2024-11-06 10:25:29.670587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.359 [2024-11-06 10:25:29.670805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.359 [2024-11-06 10:25:29.670813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.359 [2024-11-06 10:25:29.670820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.359 [2024-11-06 10:25:29.670827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.360 [2024-11-06 10:25:29.683584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.684097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.684115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.684123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.684339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.684555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.684565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.684572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.684583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.360 [2024-11-06 10:25:29.697443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.698115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.698154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.698165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.698401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.698621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.698631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.698639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.698648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.360 [2024-11-06 10:25:29.711225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.711896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.711935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.711946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.712182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.712403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.712412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.712421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.712429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.360 [2024-11-06 10:25:29.725001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.725627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.725666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.725677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.725923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.726146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.726156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.726164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.726172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.360 [2024-11-06 10:25:29.738739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.739418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.739458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.739469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.739704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.739935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.739946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.739955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.739963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.360 [2024-11-06 10:25:29.752519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.360 [2024-11-06 10:25:29.753159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.360 [2024-11-06 10:25:29.753199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.360 [2024-11-06 10:25:29.753211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.360 [2024-11-06 10:25:29.753448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.360 [2024-11-06 10:25:29.753670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.360 [2024-11-06 10:25:29.753680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.360 [2024-11-06 10:25:29.753689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.360 [2024-11-06 10:25:29.753698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.360 [2024-11-06 10:25:29.766273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.766968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.767007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.767020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.767257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.767478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.767488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.767496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.767504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.361 [2024-11-06 10:25:29.780078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.780711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.780751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.780768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.781015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.781237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.781248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.781257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.781265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.361 [2024-11-06 10:25:29.793860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.794562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.794601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.794614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.794851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.795081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.795092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.795100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.795108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.361 [2024-11-06 10:25:29.807691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.808260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.808280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.808289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.808505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.808722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.808730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.808737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.808744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.361 [2024-11-06 10:25:29.821530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.822180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.822218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.822229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.822464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.822684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.822697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.822706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.822714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.361 [2024-11-06 10:25:29.835287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.835893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.835932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.835944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.836183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.836403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.361 [2024-11-06 10:25:29.836412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.361 [2024-11-06 10:25:29.836420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.361 [2024-11-06 10:25:29.836428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.361 [2024-11-06 10:25:29.849209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.361 [2024-11-06 10:25:29.849792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.361 [2024-11-06 10:25:29.849811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.361 [2024-11-06 10:25:29.849819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.361 [2024-11-06 10:25:29.850041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.361 [2024-11-06 10:25:29.850257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.362 [2024-11-06 10:25:29.850266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.362 [2024-11-06 10:25:29.850273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.362 [2024-11-06 10:25:29.850280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.624 [2024-11-06 10:25:29.863068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.624 [2024-11-06 10:25:29.863583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.624 [2024-11-06 10:25:29.863600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.624 [2024-11-06 10:25:29.863608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.624 [2024-11-06 10:25:29.863824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.624 [2024-11-06 10:25:29.864046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.624 [2024-11-06 10:25:29.864056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.624 [2024-11-06 10:25:29.864063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.624 [2024-11-06 10:25:29.864075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.624 [2024-11-06 10:25:29.876865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.624 [2024-11-06 10:25:29.877419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.624 [2024-11-06 10:25:29.877456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.624 [2024-11-06 10:25:29.877469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.624 [2024-11-06 10:25:29.877703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.624 [2024-11-06 10:25:29.877932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.624 [2024-11-06 10:25:29.877942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.624 [2024-11-06 10:25:29.877950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.624 [2024-11-06 10:25:29.877958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.624 [2024-11-06 10:25:29.890770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.624 [2024-11-06 10:25:29.891359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.624 [2024-11-06 10:25:29.891380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.624 [2024-11-06 10:25:29.891388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.624 [2024-11-06 10:25:29.891604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.624 [2024-11-06 10:25:29.891820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.624 [2024-11-06 10:25:29.891828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.624 [2024-11-06 10:25:29.891835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.624 [2024-11-06 10:25:29.891842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.624 [2024-11-06 10:25:29.904675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.624 [2024-11-06 10:25:29.905218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.624 [2024-11-06 10:25:29.905235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.624 [2024-11-06 10:25:29.905242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.624 [2024-11-06 10:25:29.905458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.624 [2024-11-06 10:25:29.905673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.905681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.905689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.905695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.625 [2024-11-06 10:25:29.918483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.919139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.919178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.919189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.919424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.919646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.919655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.919662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.919670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.625 [2024-11-06 10:25:29.932268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.932821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.932859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.932879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.933114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.933335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.933344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.933351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.933359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.625 [2024-11-06 10:25:29.946156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.946708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.946746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.946759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.947004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.947225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.947234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.947242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.947250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.625 [2024-11-06 10:25:29.960048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.960705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.960743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.960758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.961003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.961225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.961235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.961242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.961251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.625 [2024-11-06 10:25:29.973839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.974516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.974555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.974566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.974802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.975032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.975041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.975050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.975058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.625 [2024-11-06 10:25:29.987733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:29.988392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:29.988430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:29.988440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:29.988676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:29.988905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:29.988915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:29.988924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:29.988931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.625 [2024-11-06 10:25:30.001546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:30.002203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:30.002241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:30.002253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:30.002488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:30.002709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:30.002722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:30.002731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:30.002739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.625 [2024-11-06 10:25:30.015441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:30.016117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:30.016155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:30.016166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:30.016401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:30.016622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:30.016631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:30.016640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:30.016648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.625 [2024-11-06 10:25:30.029244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:30.029929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:30.029968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:30.029980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.625 [2024-11-06 10:25:30.030216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.625 [2024-11-06 10:25:30.030438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.625 [2024-11-06 10:25:30.030447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.625 [2024-11-06 10:25:30.030455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.625 [2024-11-06 10:25:30.030463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.625 [2024-11-06 10:25:30.043055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.625 [2024-11-06 10:25:30.043668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.625 [2024-11-06 10:25:30.043687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.625 [2024-11-06 10:25:30.043695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.043919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.044136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.044145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.044152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.044163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.626 [2024-11-06 10:25:30.056966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.626 [2024-11-06 10:25:30.057493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-11-06 10:25:30.057511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.626 [2024-11-06 10:25:30.057518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.057734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.057956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.057965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.057973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.057979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.626 [2024-11-06 10:25:30.070770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.626 [2024-11-06 10:25:30.071205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-11-06 10:25:30.071222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.626 [2024-11-06 10:25:30.071229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.071444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.071661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.071669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.071676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.071683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.626 [2024-11-06 10:25:30.084526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.626 [2024-11-06 10:25:30.085201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-11-06 10:25:30.085240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.626 [2024-11-06 10:25:30.085251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.085487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.085707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.085716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.085725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.085733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.626 [2024-11-06 10:25:30.098345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.626 [2024-11-06 10:25:30.098990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-11-06 10:25:30.099028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.626 [2024-11-06 10:25:30.099040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.099278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.099498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.099506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.099514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.099522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.626 [2024-11-06 10:25:30.112111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.626 [2024-11-06 10:25:30.112701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-11-06 10:25:30.112720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.626 [2024-11-06 10:25:30.112728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.626 [2024-11-06 10:25:30.112953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.626 [2024-11-06 10:25:30.113171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.626 [2024-11-06 10:25:30.113179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.626 [2024-11-06 10:25:30.113186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.626 [2024-11-06 10:25:30.113193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.888 [2024-11-06 10:25:30.125989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.126584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.126622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.126633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.126878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.127101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.127111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.127119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.127128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.888 7544.75 IOPS, 29.47 MiB/s [2024-11-06T09:25:30.389Z] [2024-11-06 10:25:30.141567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.142060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.142099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.142115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.142351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.142571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.142580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.142588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.142596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.888 [2024-11-06 10:25:30.155380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.155969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.156007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.156019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.156258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.156478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.156487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.156496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.156504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.888 [2024-11-06 10:25:30.169293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.169859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.169906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.169919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.170158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.170379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.170388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.170396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.170404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.888 [2024-11-06 10:25:30.183190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.183845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.183891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.183902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.184138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.184365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.184374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.184383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.184391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.888 [2024-11-06 10:25:30.196993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.197555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.197594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.197605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.197841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.198071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.198081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.198089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.198097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.888 [2024-11-06 10:25:30.210882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.211453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.211491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.888 [2024-11-06 10:25:30.211502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.888 [2024-11-06 10:25:30.211737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.888 [2024-11-06 10:25:30.211966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.888 [2024-11-06 10:25:30.211976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.888 [2024-11-06 10:25:30.211985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.888 [2024-11-06 10:25:30.211993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.888 [2024-11-06 10:25:30.224771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.888 [2024-11-06 10:25:30.225431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.888 [2024-11-06 10:25:30.225470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.225482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.225716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.225945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.225955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.225968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.225976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.889 [2024-11-06 10:25:30.238548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.239205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.239244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.239257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.239495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.239717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.239726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.239733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.239741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.889 [2024-11-06 10:25:30.252324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.253072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.253110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.253121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.253356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.253577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.253586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.253594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.253602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.889 [2024-11-06 10:25:30.266187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.266893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.266932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.266943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.267179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.267399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.267408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.267417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.267425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.889 [2024-11-06 10:25:30.280008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.280683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.280721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.280732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.280975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.281196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.281205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.281214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.281222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.889 [2024-11-06 10:25:30.293813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.294373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.294411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.294424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.294662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.294900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.294911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.294920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.294928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.889 [2024-11-06 10:25:30.307703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.308281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.308302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.308310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.308526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.308742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.308751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.308758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.308765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.889 [2024-11-06 10:25:30.321541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.322128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.322166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.322187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.322422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.322644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.322653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.322661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.322669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.889 [2024-11-06 10:25:30.335457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.336188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.336227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.336240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.336477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.336697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.336707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.336715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.336723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.889 [2024-11-06 10:25:30.349302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.349952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.349990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.889 [2024-11-06 10:25:30.350002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.889 [2024-11-06 10:25:30.350237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.889 [2024-11-06 10:25:30.350458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.889 [2024-11-06 10:25:30.350467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.889 [2024-11-06 10:25:30.350475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.889 [2024-11-06 10:25:30.350484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:26.889 [2024-11-06 10:25:30.363063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.889 [2024-11-06 10:25:30.363642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.889 [2024-11-06 10:25:30.363662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.890 [2024-11-06 10:25:30.363670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.890 [2024-11-06 10:25:30.363891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.890 [2024-11-06 10:25:30.364112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.890 [2024-11-06 10:25:30.364121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.890 [2024-11-06 10:25:30.364128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.890 [2024-11-06 10:25:30.364135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:26.890 [2024-11-06 10:25:30.376907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:26.890 [2024-11-06 10:25:30.377332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.890 [2024-11-06 10:25:30.377349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:26.890 [2024-11-06 10:25:30.377357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:26.890 [2024-11-06 10:25:30.377572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:26.890 [2024-11-06 10:25:30.377788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:26.890 [2024-11-06 10:25:30.377796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:26.890 [2024-11-06 10:25:30.377803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:26.890 [2024-11-06 10:25:30.377810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.152 [2024-11-06 10:25:30.390801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.391453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.391492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.391503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.391739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.391967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.391977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.391985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.391992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.152 [2024-11-06 10:25:30.404624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.405310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.405348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.405359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.405594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.405815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.405824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.405832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.405845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.152 [2024-11-06 10:25:30.418421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.418956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.418976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.418985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.419201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.419417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.419425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.419432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.419439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.152 [2024-11-06 10:25:30.432217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.432782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.432798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.432806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.433028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.433244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.433252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.433259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.433265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.152 [2024-11-06 10:25:30.446037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.446653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.446691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.446702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.446946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.447168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.447177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.447185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.447193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.152 [2024-11-06 10:25:30.459769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.460440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.460478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.460491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.460727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.460956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.460966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.460974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.460982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.152 [2024-11-06 10:25:30.473553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.152 [2024-11-06 10:25:30.474010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.152 [2024-11-06 10:25:30.474031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.152 [2024-11-06 10:25:30.474039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.152 [2024-11-06 10:25:30.474257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.152 [2024-11-06 10:25:30.474473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.152 [2024-11-06 10:25:30.474481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.152 [2024-11-06 10:25:30.474488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.152 [2024-11-06 10:25:30.474495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.152 [2024-11-06 10:25:30.487480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.488028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.488047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.488055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.488271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.488488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.488496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.488504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.488511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.153 [2024-11-06 10:25:30.501313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.501875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.501893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.501905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.502121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.502337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.502344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.502351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.502358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.153 [2024-11-06 10:25:30.515131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.515770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.515809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.515821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.516066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.516288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.516297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.516305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.516313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.153 [2024-11-06 10:25:30.528891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.529391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.529430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.529443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.529680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.529908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.529918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.529926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.529934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.153 [2024-11-06 10:25:30.542714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.543428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.543465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.543476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.543712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.543946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.543956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.543965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.543973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.153 [2024-11-06 10:25:30.556549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.557220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.557259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.557271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.557508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.557729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.557738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.557747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.557755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.153 [2024-11-06 10:25:30.570337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.570966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.571004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.571016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.571255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.571475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.571485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.571493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.571500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.153 [2024-11-06 10:25:30.584080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.584716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.584754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.584765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.585011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.585233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.585242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.585250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.585262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.153 [2024-11-06 10:25:30.597850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.598436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.598473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.598484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.598719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.598948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.598958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.598966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.598974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.153 [2024-11-06 10:25:30.611746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.153 [2024-11-06 10:25:30.612368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.153 [2024-11-06 10:25:30.612406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.153 [2024-11-06 10:25:30.612418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.153 [2024-11-06 10:25:30.612654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.153 [2024-11-06 10:25:30.612883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.153 [2024-11-06 10:25:30.612893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.153 [2024-11-06 10:25:30.612902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.153 [2024-11-06 10:25:30.612909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.154 [2024-11-06 10:25:30.625684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.154 [2024-11-06 10:25:30.626364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.154 [2024-11-06 10:25:30.626402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.154 [2024-11-06 10:25:30.626413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.154 [2024-11-06 10:25:30.626648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.154 [2024-11-06 10:25:30.626877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.154 [2024-11-06 10:25:30.626887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.154 [2024-11-06 10:25:30.626895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.154 [2024-11-06 10:25:30.626904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.154 [2024-11-06 10:25:30.639475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.154 [2024-11-06 10:25:30.640180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.154 [2024-11-06 10:25:30.640218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.154 [2024-11-06 10:25:30.640229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.154 [2024-11-06 10:25:30.640465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.154 [2024-11-06 10:25:30.640685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.154 [2024-11-06 10:25:30.640694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.154 [2024-11-06 10:25:30.640702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.154 [2024-11-06 10:25:30.640710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.415 [2024-11-06 10:25:30.653288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.653951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.653990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.654002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.654240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.654462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.654471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.654479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.654487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.415 [2024-11-06 10:25:30.667064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.667607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.667626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.667634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.667851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.668074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.668083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.668090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.668097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.415 [2024-11-06 10:25:30.680856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.681358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.681374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.681386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.681602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.681817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.681826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.681833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.681839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.415 [2024-11-06 10:25:30.694618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.695066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.695083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.695091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.695307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.695522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.695531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.695538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.695544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.415 [2024-11-06 10:25:30.708532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.709151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.709189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.709200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.709435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.709656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.709664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.709672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.709680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.415 [2024-11-06 10:25:30.722349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.722889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.722928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.722940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.723176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.723401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.723411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.723418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.723426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.415 [2024-11-06 10:25:30.736208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.736829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.736873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.736885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.737121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.737341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.737349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.737358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.737366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.415 [2024-11-06 10:25:30.750141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.750813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.415 [2024-11-06 10:25:30.750851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.415 [2024-11-06 10:25:30.750871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.415 [2024-11-06 10:25:30.751107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.415 [2024-11-06 10:25:30.751327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.415 [2024-11-06 10:25:30.751336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.415 [2024-11-06 10:25:30.751344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.415 [2024-11-06 10:25:30.751352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.415 [2024-11-06 10:25:30.763920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.415 [2024-11-06 10:25:30.764570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.764608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.764619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.764856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.765086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.765095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.765103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.765116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.416 [2024-11-06 10:25:30.777681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.778318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.778357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.778367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.778603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.778823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.778832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.778840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.778847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.416 [2024-11-06 10:25:30.791432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.792114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.792152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.792163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.792399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.792619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.792628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.792636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.792644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.416 [2024-11-06 10:25:30.805240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.805898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.805937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.805949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.806188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.806408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.806417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.806426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.806434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.416 [2024-11-06 10:25:30.819008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.819660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.819697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.819709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.819953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.820175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.820183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.820191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.820199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.416 [2024-11-06 10:25:30.832765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.833421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.833459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.833470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.833705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.833935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.833945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.833953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.833961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.416 [2024-11-06 10:25:30.846526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.847193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.847231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.847242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.847477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.847697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.847706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.847714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.847722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.416 [2024-11-06 10:25:30.860297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.860972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.861010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.861025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.861261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.861481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.861490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.861498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.861506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.416 [2024-11-06 10:25:30.874081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.874709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.874747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.874758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.875001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.875223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.875231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.875240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.875248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.416 [2024-11-06 10:25:30.888021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.888695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.416 [2024-11-06 10:25:30.888733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.416 [2024-11-06 10:25:30.888744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.416 [2024-11-06 10:25:30.888987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.416 [2024-11-06 10:25:30.889208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.416 [2024-11-06 10:25:30.889216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.416 [2024-11-06 10:25:30.889224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.416 [2024-11-06 10:25:30.889232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.416 [2024-11-06 10:25:30.901817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.416 [2024-11-06 10:25:30.902360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.417 [2024-11-06 10:25:30.902380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.417 [2024-11-06 10:25:30.902388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.417 [2024-11-06 10:25:30.902604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.417 [2024-11-06 10:25:30.902825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.417 [2024-11-06 10:25:30.902833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.417 [2024-11-06 10:25:30.902840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.417 [2024-11-06 10:25:30.902847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.678 [2024-11-06 10:25:30.915647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.678 [2024-11-06 10:25:30.916118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.678 [2024-11-06 10:25:30.916136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.916143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.916359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.916575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.916583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.916590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.916596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.679 [2024-11-06 10:25:30.929572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.930152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.930168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.930176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.930392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.930607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.930614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.930622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.930628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.679 [2024-11-06 10:25:30.943391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.943910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.943927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.943935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.944150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.944366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.944374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.944381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.944392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.679 [2024-11-06 10:25:30.957156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.957768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.957806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.957817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.958061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.958283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.958291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.958300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.958308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.679 [2024-11-06 10:25:30.971079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.971723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.971761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.971771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.972016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.972237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.972246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.972254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.972262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.679 [2024-11-06 10:25:30.984830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.985504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.985542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.985553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.985788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.986018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.986028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.986036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.986044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.679 [2024-11-06 10:25:30.998628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:30.999278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:30.999316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:30.999327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:30.999562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:30.999782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:30.999792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:30.999800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:30.999808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.679 [2024-11-06 10:25:31.012386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:31.012982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:31.013020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:31.013030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:31.013265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:31.013485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:31.013494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:31.013502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:31.013510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.679 [2024-11-06 10:25:31.026194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:31.026858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:31.026903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:31.026915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:31.027150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:31.027370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:31.027379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:31.027387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.679 [2024-11-06 10:25:31.027395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.679 [2024-11-06 10:25:31.039969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.679 [2024-11-06 10:25:31.040604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.679 [2024-11-06 10:25:31.040642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.679 [2024-11-06 10:25:31.040657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.679 [2024-11-06 10:25:31.040903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.679 [2024-11-06 10:25:31.041125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.679 [2024-11-06 10:25:31.041133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.679 [2024-11-06 10:25:31.041141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.041149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.680 [2024-11-06 10:25:31.053715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.054322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.054361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.054374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.054611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.054832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.054841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.054849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.054857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.680 [2024-11-06 10:25:31.067640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.068232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.068252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.068260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.068475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.068691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.068700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.068707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.068714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.680 [2024-11-06 10:25:31.081483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.082013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.082030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.082038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.082253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.082473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.082482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.082489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.082496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.680 [2024-11-06 10:25:31.095266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.095795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.095811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.095819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.096040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.096256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.096264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.096271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.096277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.680 [2024-11-06 10:25:31.109052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.109675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.109713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.109724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.109968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.110190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.110199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.110207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.110214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.680 [2024-11-06 10:25:31.122988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.123650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.123688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.123699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.123942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.124163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.124172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.124180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.124192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.680 [2024-11-06 10:25:31.136758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.137436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.137474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.137485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.137721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.137958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.137969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.137977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.137985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.680 6035.80 IOPS, 23.58 MiB/s [2024-11-06T09:25:31.181Z] [2024-11-06 10:25:31.150563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.151203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.151240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.151251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.151487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.151707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.151716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.151724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.151732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.680 [2024-11-06 10:25:31.164312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.680 [2024-11-06 10:25:31.164880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.680 [2024-11-06 10:25:31.164918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.680 [2024-11-06 10:25:31.164930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.680 [2024-11-06 10:25:31.165168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.680 [2024-11-06 10:25:31.165388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.680 [2024-11-06 10:25:31.165396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.680 [2024-11-06 10:25:31.165404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.680 [2024-11-06 10:25:31.165412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.944 [2024-11-06 10:25:31.178190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.178825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.178869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.178881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.179116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.179336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.179345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.179353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.179361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.944 [2024-11-06 10:25:31.191945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.192508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.192546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.192557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.192793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.193020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.193030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.193038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.193046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.944 [2024-11-06 10:25:31.205834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.206515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.206553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.206564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.206798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.207028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.207037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.207046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.207054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.944 [2024-11-06 10:25:31.219624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.220250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.220288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.220304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.220539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.220759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.220768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.220776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.220784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.944 [2024-11-06 10:25:31.233565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.234255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.234293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.234304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.234540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.234760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.234769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.234777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.234785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.944 [2024-11-06 10:25:31.247359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.247950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.247988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.248000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.248239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.248459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.248468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.248476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.248483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.944 [2024-11-06 10:25:31.261262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.944 [2024-11-06 10:25:31.261934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.944 [2024-11-06 10:25:31.261973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.944 [2024-11-06 10:25:31.261984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.944 [2024-11-06 10:25:31.262219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.944 [2024-11-06 10:25:31.262445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.944 [2024-11-06 10:25:31.262454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.944 [2024-11-06 10:25:31.262462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.944 [2024-11-06 10:25:31.262470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.944 [2024-11-06 10:25:31.275046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.275681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.275718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.275729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.275973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.276194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.276202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.276211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.276218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.945 [2024-11-06 10:25:31.288783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.289436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.289474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.289485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.289720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.289957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.289966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.289974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.289983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.945 [2024-11-06 10:25:31.302564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.303199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.303237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.303248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.303483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.303704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.303713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.303725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.303733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.945 [2024-11-06 10:25:31.316309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.316937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.316975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.316988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.317226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.317448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.317457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.317465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.317473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.945 [2024-11-06 10:25:31.330050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.330634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.330653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.330661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.330884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.331102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.331110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.331118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.331125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.945 [2024-11-06 10:25:31.343888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.344466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.344482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.344490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.344705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.344927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.344936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.344943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.344949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.945 [2024-11-06 10:25:31.357709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.358356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.358395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.358406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.358641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.358872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.358881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.358889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.358897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.945 [2024-11-06 10:25:31.371462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.372125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.372163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.372174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.372409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.372630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.372639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.372647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.372655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.945 [2024-11-06 10:25:31.385230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.385887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.385926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.385939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.386175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.386395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.386404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.386413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.386421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.945 [2024-11-06 10:25:31.399015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.945 [2024-11-06 10:25:31.399643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.945 [2024-11-06 10:25:31.399681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.945 [2024-11-06 10:25:31.399696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.945 [2024-11-06 10:25:31.399941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.945 [2024-11-06 10:25:31.400163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.945 [2024-11-06 10:25:31.400171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.945 [2024-11-06 10:25:31.400180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.945 [2024-11-06 10:25:31.400188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.945 [2024-11-06 10:25:31.412754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.946 [2024-11-06 10:25:31.413414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.946 [2024-11-06 10:25:31.413452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.946 [2024-11-06 10:25:31.413464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.946 [2024-11-06 10:25:31.413699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.946 [2024-11-06 10:25:31.413929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.946 [2024-11-06 10:25:31.413939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.946 [2024-11-06 10:25:31.413947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.946 [2024-11-06 10:25:31.413955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:27.946 [2024-11-06 10:25:31.426527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.946 [2024-11-06 10:25:31.427194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.946 [2024-11-06 10:25:31.427232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.946 [2024-11-06 10:25:31.427243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.946 [2024-11-06 10:25:31.427478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.946 [2024-11-06 10:25:31.427698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.946 [2024-11-06 10:25:31.427707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.946 [2024-11-06 10:25:31.427715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.946 [2024-11-06 10:25:31.427722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:27.946 [2024-11-06 10:25:31.440306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:27.946 [2024-11-06 10:25:31.440886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.946 [2024-11-06 10:25:31.440907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:27.946 [2024-11-06 10:25:31.440915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:27.946 [2024-11-06 10:25:31.441131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:27.946 [2024-11-06 10:25:31.441352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:27.946 [2024-11-06 10:25:31.441360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:27.946 [2024-11-06 10:25:31.441367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:27.946 [2024-11-06 10:25:31.441373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.208 [2024-11-06 10:25:31.454146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.454756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.454794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.454805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.455050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.455271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.208 [2024-11-06 10:25:31.455280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.208 [2024-11-06 10:25:31.455289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.208 [2024-11-06 10:25:31.455297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.208 [2024-11-06 10:25:31.468068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.468690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.468728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.468738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.468982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.469204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.208 [2024-11-06 10:25:31.469212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.208 [2024-11-06 10:25:31.469221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.208 [2024-11-06 10:25:31.469229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.208 [2024-11-06 10:25:31.482185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.482860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.482904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.482915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.483151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.483370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.208 [2024-11-06 10:25:31.483379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.208 [2024-11-06 10:25:31.483391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.208 [2024-11-06 10:25:31.483400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.208 [2024-11-06 10:25:31.495983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.496643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.496681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.496692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.496936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.497158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.208 [2024-11-06 10:25:31.497167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.208 [2024-11-06 10:25:31.497175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.208 [2024-11-06 10:25:31.497182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.208 [2024-11-06 10:25:31.509760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.510437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.510475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.510486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.510721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.510950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.208 [2024-11-06 10:25:31.510960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.208 [2024-11-06 10:25:31.510968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.208 [2024-11-06 10:25:31.510976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.208 [2024-11-06 10:25:31.523541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.208 [2024-11-06 10:25:31.524188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.208 [2024-11-06 10:25:31.524226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.208 [2024-11-06 10:25:31.524237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.208 [2024-11-06 10:25:31.524472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.208 [2024-11-06 10:25:31.524693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.524702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.524710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.524718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.209 [2024-11-06 10:25:31.537295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.537962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.538000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.538011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.538246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.538467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.538476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.538484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.538492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.209 [2024-11-06 10:25:31.551070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.551617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.551655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.551666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.551910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.552131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.552140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.552148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.552156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.209 [2024-11-06 10:25:31.564930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.565584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.565622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.565633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.565878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.566100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.566109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.566118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.566126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.209 [2024-11-06 10:25:31.578703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.579405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.579444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.579461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.579700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.579928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.579938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.579946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.579954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.209 [2024-11-06 10:25:31.592531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.592993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.593031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.593043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.593279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.593499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.593508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.593516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.593524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.209 [2024-11-06 10:25:31.606321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.606943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.606982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.606995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.607234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.607455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.607464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.607472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.607481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.209 [2024-11-06 10:25:31.620261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.620839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.620859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.620872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.621089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.621310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.621318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.621325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.621332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.209 [2024-11-06 10:25:31.634104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.634738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.634777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.634788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.635034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.635255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.635264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.635272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.635280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.209 [2024-11-06 10:25:31.647847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.648516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.648555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.648566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.209 [2024-11-06 10:25:31.648801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.209 [2024-11-06 10:25:31.649029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.209 [2024-11-06 10:25:31.649040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.209 [2024-11-06 10:25:31.649048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.209 [2024-11-06 10:25:31.649056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.209 [2024-11-06 10:25:31.661639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.209 [2024-11-06 10:25:31.662175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.209 [2024-11-06 10:25:31.662196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.209 [2024-11-06 10:25:31.662203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.210 [2024-11-06 10:25:31.662421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.210 [2024-11-06 10:25:31.662636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.210 [2024-11-06 10:25:31.662644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.210 [2024-11-06 10:25:31.662658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.210 [2024-11-06 10:25:31.662665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.210 [2024-11-06 10:25:31.675489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.210 [2024-11-06 10:25:31.676008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.210 [2024-11-06 10:25:31.676047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.210 [2024-11-06 10:25:31.676059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.210 [2024-11-06 10:25:31.676297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.210 [2024-11-06 10:25:31.676517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.210 [2024-11-06 10:25:31.676525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.210 [2024-11-06 10:25:31.676533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.210 [2024-11-06 10:25:31.676541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.210 [2024-11-06 10:25:31.689325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.210 [2024-11-06 10:25:31.689989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.210 [2024-11-06 10:25:31.690028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.210 [2024-11-06 10:25:31.690039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.210 [2024-11-06 10:25:31.690274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.210 [2024-11-06 10:25:31.690495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.210 [2024-11-06 10:25:31.690504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.210 [2024-11-06 10:25:31.690512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.210 [2024-11-06 10:25:31.690520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.210 [2024-11-06 10:25:31.703163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.210 [2024-11-06 10:25:31.703884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.210 [2024-11-06 10:25:31.703922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.210 [2024-11-06 10:25:31.703934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.210 [2024-11-06 10:25:31.704171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.210 [2024-11-06 10:25:31.704391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.210 [2024-11-06 10:25:31.704400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.210 [2024-11-06 10:25:31.704408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.210 [2024-11-06 10:25:31.704416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.471 [2024-11-06 10:25:31.716999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.471 [2024-11-06 10:25:31.717605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.471 [2024-11-06 10:25:31.717643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.471 [2024-11-06 10:25:31.717654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.471 [2024-11-06 10:25:31.717900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.471 [2024-11-06 10:25:31.718121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.471 [2024-11-06 10:25:31.718131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.471 [2024-11-06 10:25:31.718139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.471 [2024-11-06 10:25:31.718147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.471 [2024-11-06 10:25:31.730941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.471 [2024-11-06 10:25:31.731600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.471 [2024-11-06 10:25:31.731638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.731649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.731893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.732115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.732124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.732132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.732140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4090063 Killed "${NVMF_APP[@]}" "$@" 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:28.472 [2024-11-06 10:25:31.744728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.745311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.745330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.745338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.745554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.745770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.745779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.745786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.745797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4091573 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4091573 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4091573 ']' 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:28.472 10:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:28.472 [2024-11-06 10:25:31.758596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.759227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.759265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.759277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.759512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.759734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.759742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.759751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.759759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.472 [2024-11-06 10:25:31.772336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.772878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.772898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.772906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.773123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.773340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.773349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.773356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.773363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.472 [2024-11-06 10:25:31.786135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.786539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.786560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.786568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.786784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.787006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.787015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.787022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.787028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.472 [2024-11-06 10:25:31.800026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.800468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.800484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.800492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.800707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.800928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.800936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.800943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.800950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.472 [2024-11-06 10:25:31.813709] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:28.472 [2024-11-06 10:25:31.813754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.472 [2024-11-06 10:25:31.813927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.814587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.814626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.814638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.814882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.815104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.815114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.815123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.815131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.472 [2024-11-06 10:25:31.827702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.828263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.828288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.828296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.472 [2024-11-06 10:25:31.828513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.472 [2024-11-06 10:25:31.828729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.472 [2024-11-06 10:25:31.828738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.472 [2024-11-06 10:25:31.828745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.472 [2024-11-06 10:25:31.828752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.472 [2024-11-06 10:25:31.841531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.472 [2024-11-06 10:25:31.842067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.472 [2024-11-06 10:25:31.842085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.472 [2024-11-06 10:25:31.842092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.842308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.842524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.842533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.842540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.842547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.473 [2024-11-06 10:25:31.855325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.855885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.855905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.855914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.856131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.856347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.856355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.856363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.856370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.473 [2024-11-06 10:25:31.869144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.869673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.869711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.869722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.869970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.870192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.870200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.870209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.870217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.473 [2024-11-06 10:25:31.883001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.883541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.883561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.883569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.883786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.884007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.884015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.884022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.884030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.473 [2024-11-06 10:25:31.896810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.897383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.897401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.897409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.897626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.897842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.897850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.897857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.897869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.473 [2024-11-06 10:25:31.908337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:28.473 [2024-11-06 10:25:31.910651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.911305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.911344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.911355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.911591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.911811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.911825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.911834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.911842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.473 [2024-11-06 10:25:31.924434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.924999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.925037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.925049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.925288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.925508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.925517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.925526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.925534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.473 [2024-11-06 10:25:31.937802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.473 [2024-11-06 10:25:31.937825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.473 [2024-11-06 10:25:31.937831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.473 [2024-11-06 10:25:31.937837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.473 [2024-11-06 10:25:31.937842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:28.473 [2024-11-06 10:25:31.938322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.938876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.938898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.938906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.938882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.473 [2024-11-06 10:25:31.938985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.473 [2024-11-06 10:25:31.938986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.473 [2024-11-06 10:25:31.939123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.939340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.939348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.939356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.939363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.473 [2024-11-06 10:25:31.952149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.952713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.952730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.952738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.952960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.953177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.473 [2024-11-06 10:25:31.953186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.473 [2024-11-06 10:25:31.953193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.473 [2024-11-06 10:25:31.953200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.473 [2024-11-06 10:25:31.965977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.473 [2024-11-06 10:25:31.966559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.473 [2024-11-06 10:25:31.966601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.473 [2024-11-06 10:25:31.966615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.473 [2024-11-06 10:25:31.966857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.473 [2024-11-06 10:25:31.967087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.474 [2024-11-06 10:25:31.967097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.474 [2024-11-06 10:25:31.967106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.474 [2024-11-06 10:25:31.967114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.736 [2024-11-06 10:25:31.979897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:31.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:31.980605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:31.980616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:31.980855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:31.981084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:31.981093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:31.981102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:31.981110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.736 [2024-11-06 10:25:31.993694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:31.994235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:31.994273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:31.994285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:31.994526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:31.994747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:31.994756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:31.994764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:31.994772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.736 [2024-11-06 10:25:32.007574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:32.008245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:32.008284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:32.008295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:32.008531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:32.008751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:32.008760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:32.008769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:32.008777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.736 [2024-11-06 10:25:32.021351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:32.022094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:32.022133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:32.022144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:32.022380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:32.022600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:32.022609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:32.022617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:32.022624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.736 [2024-11-06 10:25:32.035198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:32.035787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:32.035807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:32.035815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:32.036036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:32.036254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:32.036266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:32.036274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:32.036280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.736 [2024-11-06 10:25:32.049156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:32.049631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:32.049649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:32.049657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.736 [2024-11-06 10:25:32.049882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.736 [2024-11-06 10:25:32.050100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.736 [2024-11-06 10:25:32.050108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.736 [2024-11-06 10:25:32.050115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.736 [2024-11-06 10:25:32.050122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.736 [2024-11-06 10:25:32.062940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.736 [2024-11-06 10:25:32.063451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.736 [2024-11-06 10:25:32.063489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.736 [2024-11-06 10:25:32.063500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.063735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.063963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.063973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.063981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.063989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.737 [2024-11-06 10:25:32.076765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.077335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.077375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.077388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.077627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.077847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.077856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.077871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.077884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.737 [2024-11-06 10:25:32.090671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.091225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.091246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.091254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.091470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.091687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.091695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.091703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.091710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.737 [2024-11-06 10:25:32.104490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.105168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.105207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.105219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.105454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.105675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.105684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.105693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.105701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.737 [2024-11-06 10:25:32.118278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.118847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.118892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.118903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.119138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.119358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.119367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.119376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.119384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.737 [2024-11-06 10:25:32.132163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.132820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.132858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.132877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.133113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.133333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.133342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.133350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.133358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.737 5029.83 IOPS, 19.65 MiB/s [2024-11-06T09:25:32.238Z] [2024-11-06 10:25:32.147584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.148239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.148277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.148288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.148524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.148744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.148752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.148760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.148768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
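The bdevperf throughput sample embedded above (5029.83 IOPS, 19.65 MiB/s) is consistent with a 4 KiB I/O size, which helps when reading the later performance lines of this run: 19.65 MiB/s divided by 5029.83 IOPS is about 4096 bytes per I/O. As a plain check (an illustration, not output from the job):
  python3 -c 'print(19.65 * 1024 * 1024 / 5029.83)'
  # ~4096.5 bytes per I/O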
00:33:28.737 [2024-11-06 10:25:32.161352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.161979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.162017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.162029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.162266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.162486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.162495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.162505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.162513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.737 [2024-11-06 10:25:32.175089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.175729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.175767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.175778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.176027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.176248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.176257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.176265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.176273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.737 [2024-11-06 10:25:32.188843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.189483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.737 [2024-11-06 10:25:32.189522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.737 [2024-11-06 10:25:32.189533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.737 [2024-11-06 10:25:32.189769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.737 [2024-11-06 10:25:32.190007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.737 [2024-11-06 10:25:32.190017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.737 [2024-11-06 10:25:32.190025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.737 [2024-11-06 10:25:32.190033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.737 [2024-11-06 10:25:32.202612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.737 [2024-11-06 10:25:32.203144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.738 [2024-11-06 10:25:32.203164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.738 [2024-11-06 10:25:32.203172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.738 [2024-11-06 10:25:32.203388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.738 [2024-11-06 10:25:32.203604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.738 [2024-11-06 10:25:32.203612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.738 [2024-11-06 10:25:32.203620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.738 [2024-11-06 10:25:32.203627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.738 [2024-11-06 10:25:32.216405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.738 [2024-11-06 10:25:32.217141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.738 [2024-11-06 10:25:32.217180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.738 [2024-11-06 10:25:32.217191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.738 [2024-11-06 10:25:32.217426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.738 [2024-11-06 10:25:32.217647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.738 [2024-11-06 10:25:32.217662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.738 [2024-11-06 10:25:32.217670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.738 [2024-11-06 10:25:32.217677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.738 [2024-11-06 10:25:32.230256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.738 [2024-11-06 10:25:32.230936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.738 [2024-11-06 10:25:32.230975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.738 [2024-11-06 10:25:32.230987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.738 [2024-11-06 10:25:32.231225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.738 [2024-11-06 10:25:32.231447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.738 [2024-11-06 10:25:32.231456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.738 [2024-11-06 10:25:32.231464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.738 [2024-11-06 10:25:32.231472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.999 [2024-11-06 10:25:32.244050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.999 [2024-11-06 10:25:32.244719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-11-06 10:25:32.244758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.999 [2024-11-06 10:25:32.244769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.999 [2024-11-06 10:25:32.245012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.999 [2024-11-06 10:25:32.245233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.999 [2024-11-06 10:25:32.245242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.999 [2024-11-06 10:25:32.245250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.999 [2024-11-06 10:25:32.245258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.999 [2024-11-06 10:25:32.257825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.999 [2024-11-06 10:25:32.258518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-11-06 10:25:32.258557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.999 [2024-11-06 10:25:32.258568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.999 [2024-11-06 10:25:32.258803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.999 [2024-11-06 10:25:32.259034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.999 [2024-11-06 10:25:32.259043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.999 [2024-11-06 10:25:32.259052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.999 [2024-11-06 10:25:32.259064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.999 [2024-11-06 10:25:32.271632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.999 [2024-11-06 10:25:32.272286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-11-06 10:25:32.272325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.999 [2024-11-06 10:25:32.272339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.999 [2024-11-06 10:25:32.272576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.999 [2024-11-06 10:25:32.272796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.999 [2024-11-06 10:25:32.272805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.999 [2024-11-06 10:25:32.272813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.999 [2024-11-06 10:25:32.272821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:28.999 [2024-11-06 10:25:32.285398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.999 [2024-11-06 10:25:32.285990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-11-06 10:25:32.286010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.999 [2024-11-06 10:25:32.286018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:28.999 [2024-11-06 10:25:32.286234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:28.999 [2024-11-06 10:25:32.286449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:28.999 [2024-11-06 10:25:32.286457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:28.999 [2024-11-06 10:25:32.286465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:28.999 [2024-11-06 10:25:32.286472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:28.999 [2024-11-06 10:25:32.299255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:28.999 [2024-11-06 10:25:32.299804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.999 [2024-11-06 10:25:32.299821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:28.999 [2024-11-06 10:25:32.299829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.300049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.300265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.300273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.300280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.300286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.000 [2024-11-06 10:25:32.313063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.313510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.313526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.313534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.313749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.313970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.313979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.313986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.313993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.000 [2024-11-06 10:25:32.326965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.327606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.327645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.327658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.327906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.328127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.328136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.328145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.328153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.000 [2024-11-06 10:25:32.340723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.341284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.341303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.341311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.341527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.341744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.341752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.341759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.341766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.000 [2024-11-06 10:25:32.354535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.355105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.355143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.355156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.355397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.355619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.355628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.355636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.355643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.000 [2024-11-06 10:25:32.368417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.368952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.368971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.368979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.369196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.369412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.369419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.369427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.369433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.000 [2024-11-06 10:25:32.382195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.382735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.382773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.382784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.383027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.383248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.383257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.383266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.383274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.000 [2024-11-06 10:25:32.396050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.396577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.396615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.396626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.396868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.397090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.397104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.397112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.397120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.000 [2024-11-06 10:25:32.409899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.410564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.410603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.410614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.410850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.411079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.411089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.411097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.411105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.000 [2024-11-06 10:25:32.423666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.424300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.424338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.000 [2024-11-06 10:25:32.424349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.000 [2024-11-06 10:25:32.424584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.000 [2024-11-06 10:25:32.424805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.000 [2024-11-06 10:25:32.424813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.000 [2024-11-06 10:25:32.424821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.000 [2024-11-06 10:25:32.424829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.000 [2024-11-06 10:25:32.437407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.000 [2024-11-06 10:25:32.438001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.000 [2024-11-06 10:25:32.438040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.001 [2024-11-06 10:25:32.438052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.001 [2024-11-06 10:25:32.438291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.001 [2024-11-06 10:25:32.438511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.001 [2024-11-06 10:25:32.438521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.001 [2024-11-06 10:25:32.438529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.001 [2024-11-06 10:25:32.438541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.001 [2024-11-06 10:25:32.451327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.001 [2024-11-06 10:25:32.452013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.001 [2024-11-06 10:25:32.452051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.001 [2024-11-06 10:25:32.452062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.001 [2024-11-06 10:25:32.452298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.001 [2024-11-06 10:25:32.452518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.001 [2024-11-06 10:25:32.452526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.001 [2024-11-06 10:25:32.452535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.001 [2024-11-06 10:25:32.452542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.001 [2024-11-06 10:25:32.465121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.001 [2024-11-06 10:25:32.465807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.001 [2024-11-06 10:25:32.465845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.001 [2024-11-06 10:25:32.465857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.001 [2024-11-06 10:25:32.466104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.001 [2024-11-06 10:25:32.466325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.001 [2024-11-06 10:25:32.466334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.001 [2024-11-06 10:25:32.466342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.001 [2024-11-06 10:25:32.466351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.001 [2024-11-06 10:25:32.478921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.001 [2024-11-06 10:25:32.479607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.001 [2024-11-06 10:25:32.479645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.001 [2024-11-06 10:25:32.479656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.001 [2024-11-06 10:25:32.479899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.001 [2024-11-06 10:25:32.480332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.001 [2024-11-06 10:25:32.480342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.001 [2024-11-06 10:25:32.480351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.001 [2024-11-06 10:25:32.480359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.001 [2024-11-06 10:25:32.492737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.001 [2024-11-06 10:25:32.493257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.001 [2024-11-06 10:25:32.493296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.001 [2024-11-06 10:25:32.493307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.001 [2024-11-06 10:25:32.493544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.001 [2024-11-06 10:25:32.493764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.001 [2024-11-06 10:25:32.493773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.001 [2024-11-06 10:25:32.493781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.001 [2024-11-06 10:25:32.493789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.263 [2024-11-06 10:25:32.506594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.507272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.507311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.507322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.507558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.507778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.507787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.507795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.507803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.263 [2024-11-06 10:25:32.520378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.521083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.521121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.521133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.521368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.521589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.521598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.521607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.521616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.263 [2024-11-06 10:25:32.534192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.534854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.534900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.534911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.535151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.535373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.535382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.535390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.535398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.263 [2024-11-06 10:25:32.547968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.548653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.548692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.548703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.548947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.549168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.549177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.549185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.549193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.263 [2024-11-06 10:25:32.561758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.562386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.562424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.562436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.562673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.562902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.562912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.562920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.562929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.263 [2024-11-06 10:25:32.575494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.576184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.576223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.576234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.576469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.576689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.576703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.576711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.576719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.263 [2024-11-06 10:25:32.589287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.589953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.589991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.590004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.590242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.590471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.590480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.590489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.590497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.263 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:29.263 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:33:29.263 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.263 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.263 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.263 [2024-11-06 10:25:32.603089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.603644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.603663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.603671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.263 [2024-11-06 10:25:32.603894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.263 [2024-11-06 10:25:32.604113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.263 [2024-11-06 10:25:32.604130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.263 [2024-11-06 10:25:32.604137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.263 [2024-11-06 10:25:32.604144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.263 [2024-11-06 10:25:32.616908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.263 [2024-11-06 10:25:32.617579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.263 [2024-11-06 10:25:32.617617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.263 [2024-11-06 10:25:32.617628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.617872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.618099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.618108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.618116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.618124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
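The xtrace lines interleaved here ("(( i == 0 ))", "return 0", "timing_exit start_nvmf_tgt") are the harness finishing its wait for the nvmf target process: the retry counter never reached zero, so the wait helper returns success, the start_nvmf_tgt timing region is closed, and the script moves on to configuring the target over RPC. A minimal sketch of that kind of readiness poll (an assumed shape for illustration, not the actual autotest_common.sh code):
  i=30
  while (( i != 0 )) && ! scripts/rpc.py spdk_get_version &>/dev/null; do
      sleep 0.5
      (( i-- ))               # count down the retry budget
  done
  (( i == 0 )) && exit 1      # target never answered on the RPC socket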
00:33:29.264 [2024-11-06 10:25:32.630691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.631372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.631411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.631422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.631658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.631887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.631897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.631905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.631912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.264 [2024-11-06 10:25:32.644479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.645086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.645124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.645136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.645373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.645594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.645602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.645611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.645619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:29.264 [2024-11-06 10:25:32.646831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.264 [2024-11-06 10:25:32.658398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.658847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.658871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.658880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.659096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.659312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.659320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.659328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.659335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.264 [2024-11-06 10:25:32.672303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.672954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.672993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.673006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.673243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.673465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.673473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.673482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.673490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
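Interleaved with the reconnect noise, bdevperf.sh is bringing the target up over RPC: nvmf_create_transport and bdev_malloc_create have already run, and the subsystem, namespace and listener calls follow just below. Outside the harness the same bring-up is a short rpc.py sequence (a minimal sketch only; it assumes an SPDK checkout with nvmf_tgt already running on the default RPC socket, which the rpc_cmd wrapper hides here):

    RPC=./scripts/rpc.py    # rpc_cmd in this log is effectively a wrapper around this helper
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420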
00:33:29.264 Malloc0 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.264 [2024-11-06 10:25:32.686074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.686736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.686774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.686786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.687030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.687251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.687260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.687269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.264 [2024-11-06 10:25:32.687281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.264 [2024-11-06 10:25:32.699955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 [2024-11-06 10:25:32.700547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.264 [2024-11-06 10:25:32.700567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19086a0 with addr=10.0.0.2, port=4420 00:33:29.264 [2024-11-06 10:25:32.700575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19086a0 is same with the state(6) to be set 00:33:29.264 [2024-11-06 10:25:32.700792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19086a0 (9): Bad file descriptor 00:33:29.264 [2024-11-06 10:25:32.701014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:29.264 [2024-11-06 10:25:32.701022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:29.264 [2024-11-06 10:25:32.701030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:33:29.264 [2024-11-06 10:25:32.701036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.264 [2024-11-06 10:25:32.711646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.264 [2024-11-06 10:25:32.713821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.264 10:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4090510 00:33:29.264 [2024-11-06 10:25:32.740949] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:33:30.772 5003.57 IOPS, 19.55 MiB/s [2024-11-06T09:25:35.215Z] 5797.12 IOPS, 22.65 MiB/s [2024-11-06T09:25:36.161Z] 6409.78 IOPS, 25.04 MiB/s [2024-11-06T09:25:37.221Z] 6890.30 IOPS, 26.92 MiB/s [2024-11-06T09:25:38.602Z] 7285.00 IOPS, 28.46 MiB/s [2024-11-06T09:25:39.172Z] 7610.75 IOPS, 29.73 MiB/s [2024-11-06T09:25:40.565Z] 7882.77 IOPS, 30.79 MiB/s [2024-11-06T09:25:41.507Z] 8112.71 IOPS, 31.69 MiB/s [2024-11-06T09:25:41.507Z] 8349.73 IOPS, 32.62 MiB/s 00:33:38.006 Latency(us) 00:33:38.006 [2024-11-06T09:25:41.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:38.006 Verification LBA range: start 0x0 length 0x4000 00:33:38.006 Nvme1n1 : 15.05 8330.10 32.54 9349.38 0.00 7200.99 778.24 217579.52 00:33:38.006 [2024-11-06T09:25:41.507Z] =================================================================================================================== 00:33:38.006 [2024-11-06T09:25:41.507Z] Total : 8330.10 32.54 9349.38 0.00 7200.99 778.24 217579.52 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.006 rmmod nvme_tcp 00:33:38.006 rmmod nvme_fabrics 00:33:38.006 rmmod nvme_keyring 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4091573 ']' 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4091573 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 4091573 ']' 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 4091573 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4091573 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4091573' 00:33:38.006 killing process with pid 4091573 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 4091573 00:33:38.006 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 4091573 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.266 10:25:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.175 10:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.436 00:33:40.436 real 0m29.172s 00:33:40.436 user 1m2.936s 00:33:40.436 sys 0m8.312s 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:40.436 10:25:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.436 ************************************ 00:33:40.436 END TEST nvmf_bdevperf 00:33:40.436 ************************************ 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.436 ************************************ 00:33:40.436 START TEST nvmf_target_disconnect 00:33:40.436 ************************************ 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:40.436 * Looking for test storage... 00:33:40.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:40.436 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:40.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.437 --rc genhtml_branch_coverage=1 00:33:40.437 --rc genhtml_function_coverage=1 00:33:40.437 --rc genhtml_legend=1 00:33:40.437 --rc geninfo_all_blocks=1 00:33:40.437 --rc geninfo_unexecuted_blocks=1 00:33:40.437 00:33:40.437 ' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:40.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.437 --rc genhtml_branch_coverage=1 00:33:40.437 --rc genhtml_function_coverage=1 00:33:40.437 --rc genhtml_legend=1 00:33:40.437 --rc geninfo_all_blocks=1 00:33:40.437 --rc geninfo_unexecuted_blocks=1 00:33:40.437 00:33:40.437 ' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:40.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.437 --rc genhtml_branch_coverage=1 00:33:40.437 --rc genhtml_function_coverage=1 00:33:40.437 --rc genhtml_legend=1 00:33:40.437 --rc geninfo_all_blocks=1 00:33:40.437 --rc geninfo_unexecuted_blocks=1 00:33:40.437 00:33:40.437 ' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:40.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.437 --rc genhtml_branch_coverage=1 00:33:40.437 --rc genhtml_function_coverage=1 00:33:40.437 --rc genhtml_legend=1 00:33:40.437 --rc geninfo_all_blocks=1 00:33:40.437 --rc geninfo_unexecuted_blocks=1 00:33:40.437 00:33:40.437 ' 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.437 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.698 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.699 10:25:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:48.835 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:48.835 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:48.835 Found net devices under 0000:31:00.0: cvl_0_0 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:48.835 Found net devices under 0000:31:00.1: cvl_0_1 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
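With both e810 ports identified (cvl_0_0, cvl_0_1), the nvmf_tcp_init steps below move the target port into its own network namespace so initiator and target can reach each other over the physical link. Condensed from the commands that follow in the log (interface names and addresses as reported above; a sketch of the sequence only, not a replacement for nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # confirm the path before the tests start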
00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.835 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.836 10:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:33:48.836 00:33:48.836 --- 10.0.0.2 ping statistics --- 00:33:48.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.836 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:48.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:33:48.836 00:33:48.836 --- 10.0.0.1 ping statistics --- 00:33:48.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.836 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:48.836 ************************************ 00:33:48.836 START TEST nvmf_target_disconnect_tc1 00:33:48.836 ************************************ 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:48.836 10:25:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:48.836 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.097 [2024-11-06 10:25:52.425923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.097 [2024-11-06 10:25:52.425995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bacf0 with addr=10.0.0.2, port=4420 00:33:49.097 [2024-11-06 10:25:52.426025] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:49.097 [2024-11-06 10:25:52.426036] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:49.097 [2024-11-06 10:25:52.426044] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:49.097 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:49.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:49.097 Initializing NVMe Controllers 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:49.097 00:33:49.097 real 0m0.128s 00:33:49.097 user 0m0.065s 00:33:49.097 sys 0m0.063s 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:49.097 ************************************ 00:33:49.097 END TEST nvmf_target_disconnect_tc1 00:33:49.097 ************************************ 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:49.097 ************************************ 00:33:49.097 START TEST nvmf_target_disconnect_tc2 00:33:49.097 ************************************ 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4098263 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4098263 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4098263 ']' 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.097 10:25:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:49.097 [2024-11-06 10:25:52.572285] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:49.097 [2024-11-06 10:25:52.572339] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.358 [2024-11-06 10:25:52.678549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:49.358 [2024-11-06 10:25:52.731531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.358 [2024-11-06 10:25:52.731582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:49.358 [2024-11-06 10:25:52.731591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.358 [2024-11-06 10:25:52.731598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.358 [2024-11-06 10:25:52.731605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.358 [2024-11-06 10:25:52.734038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:49.358 [2024-11-06 10:25:52.734266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:49.358 [2024-11-06 10:25:52.734430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:49.358 [2024-11-06 10:25:52.734432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.931 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 Malloc0 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 [2024-11-06 10:25:53.459928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 10:25:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 [2024-11-06 10:25:53.488289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4098431 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:50.193 10:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:52.114 10:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4098263 00:33:52.114 10:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error 
(sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 [2024-11-06 10:25:55.516343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:52.114 [2024-11-06 10:25:55.516745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.516765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.517192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.517230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.517523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.517538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 
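Note: the burst of "completed with error (sct=0, sc=8) ... starting I/O failed" entries above is the behaviour this test case intends to provoke rather than a regression: target_disconnect_tc2 deliberately kills the target while the reconnect example still has its queue-depth-32 I/Os outstanding, then watches the host fail and retry. A minimal sketch of the sequence the xtrace output above corresponds to, with the commands copied from the log (rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the $target_pid variable below is illustrative, standing in for the target PID that was 4098263 in this run):

# Target side: attach a Malloc namespace and TCP listeners to the subsystem
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: run the reconnect example (queue depth 32, 4 KiB random I/O,
# 50/50 read/write mix, 10 s, core mask 0xF) against that listener...
build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!

# ...then kill the target out from under it and let the host retry for a while.
sleep 2
kill -9 "$target_pid"   # illustrative variable; 4098263 in this run
sleep 2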
00:33:52.114 [2024-11-06 10:25:55.517860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.517880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.518364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.518403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.518748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.518763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.519203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.519242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.519596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.519610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 00:33:52.114 [2024-11-06 10:25:55.520091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.114 [2024-11-06 10:25:55.520130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:52.114 qpair failed and we were unable to recover it. 
00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Write completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.114 Read completed with error (sct=0, sc=8) 00:33:52.114 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Write completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Write completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 Read completed with error (sct=0, sc=8) 00:33:52.115 starting I/O failed 00:33:52.115 [2024-11-06 10:25:55.520331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.115 [2024-11-06 10:25:55.520555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.520575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 
00:33:52.115 [2024-11-06 10:25:55.521099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.521130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.521459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.521470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.521774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.521783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.522185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.522216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.522403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.522414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.522727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.522737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.523117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.523148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.523472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.523483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.523622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.523631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.523985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.523995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 
00:33:52.115 [2024-11-06 10:25:55.524306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.524316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.524622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.524632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.524939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.524948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.525288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.525297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.525592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.525601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.525917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.525927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.526218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.526227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.526555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.526565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.526850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.526859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.527066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.527076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 
00:33:52.115 [2024-11-06 10:25:55.527437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.527446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.527784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.527794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.527992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.528001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.528349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.528358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.528688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.528976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.528986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.529308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.529317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.529634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.529643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.529969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.529978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.530233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.530243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 
00:33:52.115 [2024-11-06 10:25:55.530532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.530541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.530869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.530879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.531201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.531215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.531553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.531562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.531961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.531970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.532273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.532283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.532472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.532482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.532815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.532824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.533148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.533157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 00:33:52.115 [2024-11-06 10:25:55.533536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.533544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.115 qpair failed and we were unable to recover it. 
00:33:52.115 [2024-11-06 10:25:55.533847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.115 [2024-11-06 10:25:55.533855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.534166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.534174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.534499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.534507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.534849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.534857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.535070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.535078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.535298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.535306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.535649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.535658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.535881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.535890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.536201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.536210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.536482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.536490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 
00:33:52.116 [2024-11-06 10:25:55.536830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.536839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.537149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.537157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.537455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.537464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.537789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.537797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.538111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.538121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.538403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.538412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.538718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.538726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.538893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.538903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.539204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.539212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.539521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.539538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 
00:33:52.116 [2024-11-06 10:25:55.539757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.539765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.539809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.539816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.540112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.540121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.540499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.540507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.540839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.540847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.541013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.541022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.541306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.541315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.541611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.541619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.541957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.541967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.542269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.542277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 
00:33:52.116 [2024-11-06 10:25:55.542573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.542582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.542903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.542912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.543128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.543138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.543435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.543443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.543734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.543742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.544070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.544079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.544920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.544939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.545252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.545262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.545586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.545594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.545892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.545900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 
00:33:52.116 [2024-11-06 10:25:55.546075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.546084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.546404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.546412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.546753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.546761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.547108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.547117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.547405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.547413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.548047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.548065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.548377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.548387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.548574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.116 [2024-11-06 10:25:55.548583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.116 qpair failed and we were unable to recover it. 00:33:52.116 [2024-11-06 10:25:55.548998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.549014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.549315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.549322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 
00:33:52.117 [2024-11-06 10:25:55.549515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.549523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.549853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.549866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.550043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.550050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.550322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.550330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.550627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.550634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.550961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.550969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.551281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.551288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.551587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.551594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.551890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.551898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.552241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.552249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 
00:33:52.117 [2024-11-06 10:25:55.552546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.552553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.552866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.552874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.553025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.553032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.553329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.553336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.553651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.553659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.553838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.553845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.554134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.554142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.554439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.554447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.554705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.554713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.554895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.554903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 
00:33:52.117 [2024-11-06 10:25:55.555068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.555075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.555348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.555355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.555700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.555710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.555948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.555955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.556290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.556298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.556607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.556614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.556921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.556928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.557274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.557281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.557565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.557571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.557730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.557737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 
00:33:52.117 [2024-11-06 10:25:55.558134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.558142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.558283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.558290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.558487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.558494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.558805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.558812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.559102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.559109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.559301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.559307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.559670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.559678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.560022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.560029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.560351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.560358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.560637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.560644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 
00:33:52.117 [2024-11-06 10:25:55.560967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.560975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.561328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.561335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.561690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.561697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.562008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.562016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.562361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.562368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.117 [2024-11-06 10:25:55.562661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.117 [2024-11-06 10:25:55.562669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.117 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.562963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.562971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.563282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.563289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.563488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.563495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.563813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.563819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 
00:33:52.118 [2024-11-06 10:25:55.564130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.564137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.564413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.564420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.564749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.564756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.564973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.564981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.565185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.565192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.565500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.565507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.565668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.565675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.565867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.565873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.566208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.566215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.566487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.566495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 
00:33:52.118 [2024-11-06 10:25:55.566817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.566824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.567122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.567129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.567448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.567458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.567865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.567873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.568180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.568187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.568429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.568436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.568752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.568759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.568932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.568940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.569243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.569251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.569609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.569616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 
00:33:52.118 [2024-11-06 10:25:55.569946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.569953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.570278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.570285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.570592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.570599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.570902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.570909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.571199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.571208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.571507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.571514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.571807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.571814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.572120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.572128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.572436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.572442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.572595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.572602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 
00:33:52.118 [2024-11-06 10:25:55.572939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.572946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.573257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.573266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.573558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.573565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.573875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.573882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.574190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.574197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.574506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.574514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.118 [2024-11-06 10:25:55.574848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.118 [2024-11-06 10:25:55.574855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.118 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.575154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.575161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.575471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.575480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.575661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.575669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 
00:33:52.119 [2024-11-06 10:25:55.575948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.575955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.576268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.576275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.576595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.576602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.576912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.576920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.577215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.577222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.577535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.577544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.577843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.577851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.578154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.578162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.578473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.578480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.578790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.578797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 
00:33:52.119 [2024-11-06 10:25:55.579128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.579135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.579432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.579439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.579754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.579763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.580113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.580121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.580307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.580314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.580590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.580610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.580914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.580922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.581221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.581228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.581532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.581539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.581860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.581872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 
00:33:52.119 [2024-11-06 10:25:55.582171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.582178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.582382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.582389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.582731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.582738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.583031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.583038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.583359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.583366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.583651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.583659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.583949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.583956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.584119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.584127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.584453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.584461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.584768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.584775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 
00:33:52.119 [2024-11-06 10:25:55.585069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.585076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.585367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.585374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.585671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.585679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.585883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.585890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.586183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.586190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.586512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.586519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.586830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.586837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.587143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.587150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.587459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.587466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.587770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.587778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 
00:33:52.119 [2024-11-06 10:25:55.588076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.588083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.588379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.588385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.588616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.588624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.588918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.588925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.119 [2024-11-06 10:25:55.589246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.119 [2024-11-06 10:25:55.589253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.119 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.589554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.589561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.589725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.589733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.590022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.590030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.590335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.590342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.590636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.590643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 
00:33:52.120 [2024-11-06 10:25:55.590967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.590974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.591299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.591306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.591614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.591622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.591930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.591938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.592318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.592324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.592651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.592658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.592948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.592956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.593172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.593179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.593474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.593482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.593790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.593797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 
00:33:52.120 [2024-11-06 10:25:55.593959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.593967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.594297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.594303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.594596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.594603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.594895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.594903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.595079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.595088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.595372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.595378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.595671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.595678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.595989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.595997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.596304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.596312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.596603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.596610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 
00:33:52.120 [2024-11-06 10:25:55.596901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.596909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.597237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.597244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.597539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.597547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.597873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.597880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.598148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.598155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.598334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.598343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.598645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.598653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.598945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.598952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.599246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.599254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.599558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.599565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 
00:33:52.120 [2024-11-06 10:25:55.599857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.599875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.600044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.600052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.600336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.600343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.600645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.600662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.600949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.600957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.601310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.601317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.601606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.601613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.601937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.601945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.602239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.602248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.602606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.602613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 
00:33:52.120 [2024-11-06 10:25:55.602922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.602929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.603257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.603264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.120 qpair failed and we were unable to recover it. 00:33:52.120 [2024-11-06 10:25:55.603587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.120 [2024-11-06 10:25:55.603596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.603893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.603901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.604214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.604221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.604536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.604542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.604866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.604873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.605196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.605203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.605517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.605524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.605725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.605732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 
00:33:52.121 [2024-11-06 10:25:55.605908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.605917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.606218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.606225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.606835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.606852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.607179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.607188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.607516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.607524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.607830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.607838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.608040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.608048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.608381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.608388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.608693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.608701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 00:33:52.121 [2024-11-06 10:25:55.608839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.121 [2024-11-06 10:25:55.608848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.121 qpair failed and we were unable to recover it. 
00:33:52.398 [2024-11-06 10:25:55.609130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.609139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.609468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.609478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.609768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.609779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.610074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.610082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.610406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.610413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.610701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.610708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.611020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.611027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.611233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.611240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.611599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.611606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 00:33:52.398 [2024-11-06 10:25:55.611817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.398 [2024-11-06 10:25:55.611824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.398 qpair failed and we were unable to recover it. 
00:33:52.404 [2024-11-06 10:25:55.669944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.669951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.670243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.670250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.670428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.670436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.670723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.670730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.671045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.671053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.671327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.671334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.671618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.671625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.671927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.671934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.672101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.672109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.672422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.672428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 
00:33:52.404 [2024-11-06 10:25:55.672759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.672766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.673100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.673108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.673416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.673423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.673621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.673627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.673910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.673917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.674241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.674248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.674543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.674550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.674854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.674865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.675157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.675165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.675501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.675509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 
00:33:52.404 [2024-11-06 10:25:55.675816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.675823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.675986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.675994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.676216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.676223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.676527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.676534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.676845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.676853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.677151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.677158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.677527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.677535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.677831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.677838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.678039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.678046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.678410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.678416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 
00:33:52.404 [2024-11-06 10:25:55.678753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.678760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.679097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.679104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.679384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.679392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.679591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.679599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.679918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.679926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.680212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.680218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.680514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.680521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.404 qpair failed and we were unable to recover it. 00:33:52.404 [2024-11-06 10:25:55.680830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.404 [2024-11-06 10:25:55.680837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.681049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.681056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.681333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.681340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 
00:33:52.405 [2024-11-06 10:25:55.681649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.681656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.681968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.681983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.682366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.682372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.682647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.682661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.682948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.682955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.683265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.683273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.683476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.683483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.683798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.683805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.684110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.684116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.684405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.684415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 
00:33:52.405 [2024-11-06 10:25:55.684719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.684727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.685065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.685072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.685375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.685382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.685758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.685765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.686049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.686057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.686376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.686382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.686674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.686681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.687004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.687011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.687231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.687238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.687559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.687565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 
00:33:52.405 [2024-11-06 10:25:55.687853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.687863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.688144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.688150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.688459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.688467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.688782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.688789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.689100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.689108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.689420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.689428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.689735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.689743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.690032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.690040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.690344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.690352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.690655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.690662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 
00:33:52.405 [2024-11-06 10:25:55.690965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.690973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.691335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.691341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.691644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.691650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.691974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.691981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.692289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.692296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.405 [2024-11-06 10:25:55.692510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.405 [2024-11-06 10:25:55.692517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.405 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.692854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.692863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.693164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.693171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.693478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.693485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.693768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.693776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 
00:33:52.406 [2024-11-06 10:25:55.694003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.694010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.694338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.694345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.694663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.694670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.694976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.694983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.695292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.695298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.695606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.695613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.695927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.695934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.696261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.696268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.696590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.696597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.696910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.696919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 
00:33:52.406 [2024-11-06 10:25:55.697234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.697242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.697553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.697560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.697739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.697746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.697925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.697933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.698254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.698262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.698548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.698555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.698860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.698873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.699057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.699065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.699382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.699389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.699708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.699714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 
00:33:52.406 [2024-11-06 10:25:55.700023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.700030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.700363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.700370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.700658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.700665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.700971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.700978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.701286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.701293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.701600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.701607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.701822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.701828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.702136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.702143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.702432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.702440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.702750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.702757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 
00:33:52.406 [2024-11-06 10:25:55.703082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.703089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.703402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.703409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.703720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.703727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.406 [2024-11-06 10:25:55.704062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.406 qpair failed and we were unable to recover it. 00:33:52.406 [2024-11-06 10:25:55.704361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.704368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.704521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.704529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.704807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.704815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.705131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.705138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.705441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.705448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.705755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.705761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 
00:33:52.407 [2024-11-06 10:25:55.705928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.705936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.706302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.706309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.706644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.706651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.706959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.706966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.707281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.707288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.707599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.707605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.707915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.707922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.708244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.708250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.708549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.708556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.708880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.708888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 
00:33:52.407 [2024-11-06 10:25:55.709203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.709217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.709524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.709530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.709829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.709836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.710028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.710035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.710350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.710356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.710670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.710676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.710881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.710888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.711208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.711214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.711501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.711508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 00:33:52.407 [2024-11-06 10:25:55.711682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.407 [2024-11-06 10:25:55.711690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.407 qpair failed and we were unable to recover it. 
00:33:52.407 [2024-11-06 10:25:55.712043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.407 [2024-11-06 10:25:55.712051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:52.407 qpair failed and we were unable to recover it.
00:33:52.407 [... the same three-line failure (posix_sock_create connect() error with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7fb594000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 10:25:55.712 through 10:25:55.771 ...]
00:33:52.413 [2024-11-06 10:25:55.771931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.413 [2024-11-06 10:25:55.771938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:52.413 qpair failed and we were unable to recover it.
00:33:52.413 [2024-11-06 10:25:55.772251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.772259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.772579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.772587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.772887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.772894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.773212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.773220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.773420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.773427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.773700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.773707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.773920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.773928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.774134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.774141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.774421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.774428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.774700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.774708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 
00:33:52.413 [2024-11-06 10:25:55.775000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.775007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.775310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.775317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.775632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.775639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.775955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.775963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.776295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.776302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.776617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.776624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.776941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.776949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.777247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.777255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.777547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.777554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 00:33:52.413 [2024-11-06 10:25:55.777869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.777878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.413 qpair failed and we were unable to recover it. 
00:33:52.413 [2024-11-06 10:25:55.778251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.413 [2024-11-06 10:25:55.778259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.778560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.778567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.778871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.778879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.779188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.779196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.779392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.779399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.779646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.779653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.780013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.780020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.780275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.780282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.780603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.780609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.780914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.780921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 
00:33:52.414 [2024-11-06 10:25:55.781211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.781217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.781499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.781513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.781901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.781908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.782226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.782233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.782443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.782449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.782641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.782648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.782872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.782880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.783155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.783162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.783449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.783457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.783741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.783749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 
00:33:52.414 [2024-11-06 10:25:55.783930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.783938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.784221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.784228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.784521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.784528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.784705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.784713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.785036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.785043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.785382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.785389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.785450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.785457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.785745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.785752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.786084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.786092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.786298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.786305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 
00:33:52.414 [2024-11-06 10:25:55.786540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.786547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.786858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.786868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.787192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.787199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.414 qpair failed and we were unable to recover it. 00:33:52.414 [2024-11-06 10:25:55.787512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.414 [2024-11-06 10:25:55.787519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.787822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.787829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.788114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.788122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.788332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.788338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.788602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.788609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.788929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.788936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.789332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.789341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 
00:33:52.415 [2024-11-06 10:25:55.789650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.789657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.789835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.789843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.790162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.790170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.790512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.790519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.790827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.790835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.791141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.791148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.791449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.791456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.791760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.791768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.792080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.792087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.792286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.792293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 
00:33:52.415 [2024-11-06 10:25:55.792518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.792525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.792842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.792848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.793156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.793163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.793338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.793346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.793666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.793672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.793840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.793848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.794235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.794410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.415 [2024-11-06 10:25:55.794417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.415 qpair failed and we were unable to recover it. 00:33:52.415 [2024-11-06 10:25:55.794697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.794705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.795001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.795008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 
00:33:52.416 [2024-11-06 10:25:55.795315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.795322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.795682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.795688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.795897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.795905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.796197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.796203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.796417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.796425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.796706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.796713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.797041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.797048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.797356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.797364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.797663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.797930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.797936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 
00:33:52.416 [2024-11-06 10:25:55.798242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.798249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.798557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.798565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.798859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.798871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.799205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.799211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.799417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.799424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.799726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.799733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.800030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.800037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.800333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.800341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.800657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.800664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.800733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 
00:33:52.416 [2024-11-06 10:25:55.801025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.801033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.801305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.801312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.801612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.801620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.801936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.416 [2024-11-06 10:25:55.801943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.416 qpair failed and we were unable to recover it. 00:33:52.416 [2024-11-06 10:25:55.802267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.802274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.802575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.802582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.802892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.802900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.803292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.803299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.803488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.803495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.803800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.803808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 
00:33:52.417 [2024-11-06 10:25:55.804007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.804014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.804359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.804366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.804675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.804681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.804995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.805002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.805322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.805329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.805667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.805674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.806053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.806060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.806347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.806355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.806656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.806663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.806984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.806992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 
00:33:52.417 [2024-11-06 10:25:55.807321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.807328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.807493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.807500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.807799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.807806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.808125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.808132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.417 [2024-11-06 10:25:55.808320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.417 [2024-11-06 10:25:55.808327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.417 qpair failed and we were unable to recover it. 00:33:52.418 [2024-11-06 10:25:55.808678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.808684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it. 00:33:52.418 [2024-11-06 10:25:55.809006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.809014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it. 00:33:52.418 [2024-11-06 10:25:55.809211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.809218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it. 00:33:52.418 [2024-11-06 10:25:55.809494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.809501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it. 00:33:52.418 [2024-11-06 10:25:55.809801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.809807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it. 
00:33:52.418 [2024-11-06 10:25:55.810216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.418 [2024-11-06 10:25:55.810223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.418 qpair failed and we were unable to recover it.
00:33:52.418-00:33:52.424 [2024-11-06 10:25:55.810216 - 10:25:55.870680] The same pair of errors recurs on every reconnection attempt in this interval: posix.c:1054:posix_sock_create reports connect() failed with errno = 111 for each attempt, and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for the same tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."
00:33:52.424 [2024-11-06 10:25:55.871006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.871013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.424 [2024-11-06 10:25:55.871301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.871308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.424 [2024-11-06 10:25:55.871604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.871611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.424 [2024-11-06 10:25:55.871902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.871909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.424 [2024-11-06 10:25:55.872225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.872231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.424 [2024-11-06 10:25:55.872516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.424 [2024-11-06 10:25:55.872524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.424 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.872809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.872816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.873085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.873093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.873406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.873413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.873709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.873717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 
00:33:52.425 [2024-11-06 10:25:55.874023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.874030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.874342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.874348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.874658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.874666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.874994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.875001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.875303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.875309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.875619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.875626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.875910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.875917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.876124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.876131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.876443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.876450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.876761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.876768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 
00:33:52.425 [2024-11-06 10:25:55.877059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.877066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.877454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.877461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.877651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.877658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.877958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.877965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.878331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.878338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.878645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.878652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.879011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.879021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.879323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.879330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.879611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.879618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.879914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.879921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 
00:33:52.425 [2024-11-06 10:25:55.880223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.880229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.880548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.880555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.880743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.880750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.881130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.881416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.881423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.881747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.881754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.882084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.882092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.882399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.882405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.425 [2024-11-06 10:25:55.882701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.425 [2024-11-06 10:25:55.882708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.425 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.883027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.883036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 
00:33:52.704 [2024-11-06 10:25:55.883322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.883330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.883639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.883646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.883938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.883946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.884028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.884035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.884279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.884286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.884587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.884594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.884903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.884911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.885233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.885240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.885547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.885554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.885868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.885876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 
00:33:52.704 [2024-11-06 10:25:55.886188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.886195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.886484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.886491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.886782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.886789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.887066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.887073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.887398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.887690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.887698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.887996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.888003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.888284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.888290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.888574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.888581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.888871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.888878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 
00:33:52.704 [2024-11-06 10:25:55.889186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.889194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.889434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.889443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.889791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.889799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.890100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.890108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.890382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.890388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.890756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.704 [2024-11-06 10:25:55.890762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.704 qpair failed and we were unable to recover it. 00:33:52.704 [2024-11-06 10:25:55.891045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.891053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.891357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.891365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.891562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.891570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.891876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.891884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 
00:33:52.705 [2024-11-06 10:25:55.892183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.892190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.892477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.892486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.892878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.892885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.893183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.893190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.893512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.893519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.893821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.893827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.894146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.894154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.894467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.894475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.894685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.894692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.894905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.894912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 
00:33:52.705 [2024-11-06 10:25:55.895260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.895266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.895572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.895580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.895876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.895884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.896064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.896072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.896349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.896356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.896518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.896524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.896938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.896946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.897237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.897245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.897551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.897558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.897843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.897851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 
00:33:52.705 [2024-11-06 10:25:55.898047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.898055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.898376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.898383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.898578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.898585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.898960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.898967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.899273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.899281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.899574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.899581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.899891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.899899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.900207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.900213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.900525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.900531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.900776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.900784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 
00:33:52.705 [2024-11-06 10:25:55.901074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.901081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.901371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.901378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.901696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.901703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.902022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.705 [2024-11-06 10:25:55.902030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.705 qpair failed and we were unable to recover it. 00:33:52.705 [2024-11-06 10:25:55.902348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.902355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.902704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.902712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.903027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.903036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.903222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.903229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.903428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.903436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.903731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.903738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 
00:33:52.706 [2024-11-06 10:25:55.904011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.904019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.904348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.904355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.904641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.904648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.904973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.904981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.905275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.905282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.905617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.905624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.905928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.905935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.906241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.906248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.906572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.906579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.906869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.906878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 
00:33:52.706 [2024-11-06 10:25:55.907196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.907204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.907514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.907521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.907821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.907828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.907993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.908001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.908278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.908286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.908614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.908623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.908908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.908917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.909217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.909225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.909535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.909542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.909866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.909874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 
00:33:52.706 [2024-11-06 10:25:55.910259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.910265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.910580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.910587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.910765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.910772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.911074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.911081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.911367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.911376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.911528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.911535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.911852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.911859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.912066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.912073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.912354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.912362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.912674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.912681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 
00:33:52.706 [2024-11-06 10:25:55.912996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.913003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.913332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.913339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.706 qpair failed and we were unable to recover it. 00:33:52.706 [2024-11-06 10:25:55.913532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.706 [2024-11-06 10:25:55.913539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.913832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.913839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.914056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.914064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.914391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.914398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.914701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.914710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.914999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.915006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.915273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.915279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.915583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.915590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 
00:33:52.707 [2024-11-06 10:25:55.915786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.915793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.916202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.916210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.916544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.916551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.916776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.916783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.917106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.917113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.917442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.917450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.917757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.917765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.918075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.918085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.918388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.918396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.918694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.918702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 
00:33:52.707 [2024-11-06 10:25:55.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.918876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.919192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.919199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.919373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.919382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.919728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.919735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.920031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.920039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.920337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.920344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.920628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.920636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.920926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.920933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.921300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.921307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.921618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.921625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 
00:33:52.707 [2024-11-06 10:25:55.921869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.921877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.922185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.922192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.922502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.922509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.922821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.922827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.923014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.923021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.923376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.923383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.923705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.923712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.924027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.924035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.924358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.924365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.707 [2024-11-06 10:25:55.924748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.924755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 
00:33:52.707 [2024-11-06 10:25:55.925059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.707 [2024-11-06 10:25:55.925066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.707 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.925366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.925374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.925544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.925552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.925843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.925850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.926159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.926166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.926476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.926482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.926805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.926813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.927206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.927214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.927524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.927531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.927885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.927892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 
00:33:52.708 [2024-11-06 10:25:55.928247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.928254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.928587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.928593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.928765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.928773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.929080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.929087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.929400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.929407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.929729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.929736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.930027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.930034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.930358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.930365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.930637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.930644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.930860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.930869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 
00:33:52.708 [2024-11-06 10:25:55.931187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.931193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.931505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.931512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.931819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.931826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.932116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.932124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.932434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.932441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.932754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.932761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.933045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.933052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.933387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.933394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.933708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.933716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.934012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.934019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 
00:33:52.708 [2024-11-06 10:25:55.934328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.934335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.934660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.934667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.708 [2024-11-06 10:25:55.934864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.708 [2024-11-06 10:25:55.934871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.708 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.935045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.935052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.935248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.935255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.935531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.935537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.935760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.935767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.936065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.936072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.936403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.936410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.936704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.936711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 
00:33:52.709 [2024-11-06 10:25:55.937024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.937031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.937347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.937514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.937521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.937840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.937847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.938137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.938467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.938754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.938763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.939077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.939084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.939366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.939374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.939675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.939682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 
00:33:52.709 [2024-11-06 10:25:55.939902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.939909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.940245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.940251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.940578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.940586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.940893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.940901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.941216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.941223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.941557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.941564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.941878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.941885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.942188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.942195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.942503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.942509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.942713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.942720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 
00:33:52.709 [2024-11-06 10:25:55.942994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.943002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.943170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.943178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.943564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.943572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.943869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.943876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.944163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.944170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.944493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.944499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.944807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.944815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.944982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.944991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.945277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.945284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.709 [2024-11-06 10:25:55.945529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.945536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 
00:33:52.709 [2024-11-06 10:25:55.945866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.709 [2024-11-06 10:25:55.945873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.709 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.946186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.946193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.946511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.946518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.946800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.946808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.947097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.947399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.947406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.947595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.947603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.947931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.947938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.948258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.948265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.948569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.948576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 
00:33:52.710 [2024-11-06 10:25:55.948957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.948965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.949248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.949255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.949569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.949576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.949870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.949877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.950167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.950174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.950475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.950482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.950659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.950668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.950948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.950955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.951157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.951164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.951465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.951471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 
00:33:52.710 [2024-11-06 10:25:55.951761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.951769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.952157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.952163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.952442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.952450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.952762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.952769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.952976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.952983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.953277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.953283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.953579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.953586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.953888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.953895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.954202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.954210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.954497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 
00:33:52.710 [2024-11-06 10:25:55.954713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.954720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.955045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.955052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.955362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.955369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.955672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.955680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.955990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.955998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.956313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.956320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.956632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.956638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.956931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.956945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.957269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.710 [2024-11-06 10:25:55.957276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.710 qpair failed and we were unable to recover it. 00:33:52.710 [2024-11-06 10:25:55.957576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.957582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 
00:33:52.711 [2024-11-06 10:25:55.957911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.957918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.958245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.958252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.958445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.958452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.958719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.958726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.959028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.959035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.959332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.959340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.959649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.959656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.959974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.959981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.960152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.960159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.960195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.960202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 
00:33:52.711 [2024-11-06 10:25:55.960482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.960489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.960807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.960814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.961091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.961098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.961412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.961418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.961701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.961709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.962041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.962048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.962333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.962341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.962644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.962650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.962942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.962950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 00:33:52.711 [2024-11-06 10:25:55.963248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.711 [2024-11-06 10:25:55.963255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.711 qpair failed and we were unable to recover it. 
00:33:52.712 [... identical error sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-11-06 10:25:55.963 through 10:25:56.019; duplicate log entries elided ...]
00:33:52.716 [2024-11-06 10:25:56.020131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.020138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.020447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.020454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.020767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.020774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.020937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.020945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.021259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.021266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.021597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.021604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.021816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.021822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.022163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.022170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.022363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.022370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.022738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.022745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 
00:33:52.716 [2024-11-06 10:25:56.022945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.022952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.023358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.023366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.023669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.023676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.023907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.716 [2024-11-06 10:25:56.023914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.716 qpair failed and we were unable to recover it. 00:33:52.716 [2024-11-06 10:25:56.024259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.024267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.024555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.024562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.024779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.024786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.024853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.024860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.025180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.025187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.025508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.025515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 
00:33:52.717 [2024-11-06 10:25:56.025834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.025841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.026133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.026141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.026463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.026469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.026681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.026688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.026976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.026983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.027300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.027307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.027634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.027641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.027959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.027966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.028291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.028297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.028611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.028618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 
00:33:52.717 [2024-11-06 10:25:56.028908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.028916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.029221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.029228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.029517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.029525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.029877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.029886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.030181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.030188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.030515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.030521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.030873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.030881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.031186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.031193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.031484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.031492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.031797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.031804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 
00:33:52.717 [2024-11-06 10:25:56.032088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.032096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.032309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.032316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.032590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.032597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.032914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.032921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.033263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.033269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.033578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.033585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.033745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.033752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.034122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.717 [2024-11-06 10:25:56.034130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.717 qpair failed and we were unable to recover it. 00:33:52.717 [2024-11-06 10:25:56.034329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.034336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.034664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.034671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 
00:33:52.718 [2024-11-06 10:25:56.034999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.035006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.035304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.035312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.035627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.035634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.035865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.036085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.036092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.036290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.036298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.036625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.036632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.036926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.036933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.037267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.037274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.037555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.037562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 
00:33:52.718 [2024-11-06 10:25:56.037974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.037981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.038421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.038427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.038732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.038739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.039078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.039085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.039386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.039393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.039706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.039713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.040029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.040036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.040389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.040396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.040688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.040695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.040885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.040892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 
00:33:52.718 [2024-11-06 10:25:56.041197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.041204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.041547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.041867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.041874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.042163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.042171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.042485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.042492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.042649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.042657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.043083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.043090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.043403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.043410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.043709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.043715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.044032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.044040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 
00:33:52.718 [2024-11-06 10:25:56.044261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.044268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.044531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.044538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.044863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.044872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.045063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.045071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.045329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.045337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.718 qpair failed and we were unable to recover it. 00:33:52.718 [2024-11-06 10:25:56.045550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.718 [2024-11-06 10:25:56.045557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.045861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.045874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.046204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.046210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.046496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.046504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.046815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.046822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 
00:33:52.719 [2024-11-06 10:25:56.047098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.047106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.047428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.047435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.047740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.047748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.048039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.048046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.048359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.048366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.048669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.048675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.049074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.049081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.049270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.049277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.049451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.049459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.049827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.049834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 
00:33:52.719 [2024-11-06 10:25:56.050148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.050156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.050465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.050472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.050659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.050665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.051020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.051027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.051347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.051354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.051681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.051688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.051979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.051986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.052353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.052360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.052671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.052679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.052974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.052982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 
00:33:52.719 [2024-11-06 10:25:56.053314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.053321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.053666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.053673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.054015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.054022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.054321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.054330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.054662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.054669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.054977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.054984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.055301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.055308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.055622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.055629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.055836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.055843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.055936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.055943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 
00:33:52.719 [2024-11-06 10:25:56.056122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.056130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.056454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.056461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.056791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.056798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.719 [2024-11-06 10:25:56.057133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.719 [2024-11-06 10:25:56.057140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.719 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.057432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.057439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.057806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.058163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.058170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.058521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.058528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.058695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.058703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.058978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.058985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 
00:33:52.720 [2024-11-06 10:25:56.059336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.059343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.059708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.059716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.059917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.059924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.060206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.060213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.060453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.060460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.060788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.060794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.060947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.060955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.061316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.061322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.061635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.061642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.061971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.061978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 
00:33:52.720 [2024-11-06 10:25:56.062270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.062277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.062465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.062473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.062784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.062790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.062985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.062993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.063279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.063286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.063597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.063605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.063959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.063966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.064148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.064155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.064441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.064447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.064761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.064768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 
00:33:52.720 [2024-11-06 10:25:56.065027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.065035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.065372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.065379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.065686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.065693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.066012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.066021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.066379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.066385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.066741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.066748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.066812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.066819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.067169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.067176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.067467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.067474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.067781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.067787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 
00:33:52.720 [2024-11-06 10:25:56.068108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.068115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.068469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.720 [2024-11-06 10:25:56.068476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.720 qpair failed and we were unable to recover it. 00:33:52.720 [2024-11-06 10:25:56.068789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.068797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.069130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.069137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.069435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.069443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.069755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.069763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.070097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.070105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.070257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.070265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.070547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.070555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.070734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 
00:33:52.721 [2024-11-06 10:25:56.071031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.071038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.071366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.071374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.071692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.071699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.071998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.072005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.072215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.072222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.072568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.072575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.072978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.072985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.073289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.073297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.073608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.073615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.073881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.073888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 
00:33:52.721 [2024-11-06 10:25:56.074215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.074222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.074456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.074463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.074777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.074784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.075087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.075094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.075425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.075432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.075724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.075732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.076032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.076039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.076355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.076362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.076689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.076696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.076993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.077000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 
00:33:52.721 [2024-11-06 10:25:56.077214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.077221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.077529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.077536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.077704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.077712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.078002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.078011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.078310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.078317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.078683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.078690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.078989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.078997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.079306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.079313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.079632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.079638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 00:33:52.721 [2024-11-06 10:25:56.079957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.721 [2024-11-06 10:25:56.079964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.721 qpair failed and we were unable to recover it. 
00:33:52.721 [2024-11-06 10:25:56.080257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.080270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.080680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.080686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.080981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.080989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.081301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.081309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.081507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.081514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.081693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.081700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.082052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.082060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.082379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.082387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.082476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.082483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.082808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.082815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 
00:33:52.722 [2024-11-06 10:25:56.083115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.083131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.083463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.083470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.083867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.083874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.084185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.084192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.084358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.084366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.084681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.084688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.085041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.085048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.085348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.085355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.085527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.085536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.085850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.085857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 
00:33:52.722 [2024-11-06 10:25:56.086040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.086048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.086391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.086398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.086601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.086608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.086934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.086942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.087264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.087271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.087599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.087606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.087911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.087918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.088129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.088136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.088484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.088491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.088800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.088807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 
00:33:52.722 [2024-11-06 10:25:56.089113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.089120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.089371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.089377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.089730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.089737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.090046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.722 [2024-11-06 10:25:56.090055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.722 qpair failed and we were unable to recover it. 00:33:52.722 [2024-11-06 10:25:56.090358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.090365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.090657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.090664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.090978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.090986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.091317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.091325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.091503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.091509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.091807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.091814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 
00:33:52.723 [2024-11-06 10:25:56.092133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.092141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.092455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.092463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.092757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.092764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.092927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.092933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.093213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.093220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.093535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.093541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.093830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.093837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.094004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.094011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.094319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.094326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.094649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.094657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 
00:33:52.723 [2024-11-06 10:25:56.094975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.094983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.095310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.095318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.095471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.095478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.095749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.095756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.096062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.096069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.096470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.096478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.096646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.096653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.096954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.096961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.097173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.097180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.097477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.097484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 
00:33:52.723 [2024-11-06 10:25:56.097794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.097801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.098105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.098434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.098441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.098757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.098763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.099156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.099163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.099444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.099451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.099784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.099791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.100141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.100148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.100457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.100464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.100657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.100664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 
00:33:52.723 [2024-11-06 10:25:56.100952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.100959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.101293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.101300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.723 qpair failed and we were unable to recover it. 00:33:52.723 [2024-11-06 10:25:56.101645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.723 [2024-11-06 10:25:56.101651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.101948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.101963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.102273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.102280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.102574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.102581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.102893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.102900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.103226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.103233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.103533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.103539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.103687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.103694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 
00:33:52.724 [2024-11-06 10:25:56.103860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.103869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.104186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.104192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.104398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.104404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.104737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.104744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.105010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.105018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.105327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.105501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.105508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.105829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.105837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.106161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.106168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.106476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.106483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 
00:33:52.724 [2024-11-06 10:25:56.106775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.106782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.107103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.107111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.107417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.107424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.107743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.107750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.107944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.107952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.108243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.108250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.108572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.108579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.108749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.108757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.109055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.109063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.109280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.109287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 
00:33:52.724 [2024-11-06 10:25:56.109423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.109431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.109690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.109697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.110015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.110022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.110309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.110316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.110633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.110639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.110966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.110973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.111277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.111284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.111659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.111666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.111986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.111993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 00:33:52.724 [2024-11-06 10:25:56.112164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.112170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it. 
00:33:52.724 [2024-11-06 10:25:56.112446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.724 [2024-11-06 10:25:56.112453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.724 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 2024-11-06 10:25:56.112446 through 10:25:56.173519 (log prefixes 00:33:52.724 through 00:33:52.730); every reconnect attempt in this window fails the same way ...]
00:33:52.730 [2024-11-06 10:25:56.173818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.173825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.174158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.174166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.174471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.174478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.174776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.174783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.175146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.175153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.175359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.175366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.175752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.175760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.176090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.176098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.176418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.176425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.176732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.176740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 
00:33:52.730 [2024-11-06 10:25:56.177026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.177033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.177235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.177242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.177404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.177411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.177711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.177718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.178018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.178025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.178359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.730 [2024-11-06 10:25:56.178365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.730 qpair failed and we were unable to recover it. 00:33:52.730 [2024-11-06 10:25:56.178656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.178664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.178971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.178978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.179288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.179294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.179606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.179614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 
00:33:52.731 [2024-11-06 10:25:56.179793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.179803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.180110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.180117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.180428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.180435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.180740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.180748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.181149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.181156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.181396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.181403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.181732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.181738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.182030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.182037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.182362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.182369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.182679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.182685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 
00:33:52.731 [2024-11-06 10:25:56.182982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.182989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.183371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.183377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.183643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.183650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.183979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.183986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.184275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.184282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.184592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.184599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.184911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.184918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.185230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.185238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.185531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.185539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.185848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.185856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 
00:33:52.731 [2024-11-06 10:25:56.186230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.186237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.186401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.186408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:52.731 [2024-11-06 10:25:56.186789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.731 [2024-11-06 10:25:56.186796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:52.731 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.187096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.187106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.187449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.187456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.187739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.187747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.187873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.188198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.188205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.188504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.188511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.188861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.188876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 
00:33:53.012 [2024-11-06 10:25:56.189085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.189092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.189353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.189360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.189571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.189585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.189909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.189916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.190209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.190217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.190495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.190502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.190793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.190807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.190989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.190997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.191288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.191295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.191577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.191584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 
00:33:53.012 [2024-11-06 10:25:56.191888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.191897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.192205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.192213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.192539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.192545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.192827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.192835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.193152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.193159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.193448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.193456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.012 qpair failed and we were unable to recover it. 00:33:53.012 [2024-11-06 10:25:56.193763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.012 [2024-11-06 10:25:56.193769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.194079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.194087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.194406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.194413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.194737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.194744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 
00:33:53.013 [2024-11-06 10:25:56.195042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.195049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.195357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.195364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.195675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.195682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.195887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.195894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.196158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.196166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.196485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.196493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.196806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.196814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.197130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.197138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.197430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.197437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.197612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.197620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 
00:33:53.013 [2024-11-06 10:25:56.197916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.197923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.198243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.198250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.198546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.198553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.198872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.198879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.199180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.199186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.199493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.199500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.199809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.199815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.200118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.200324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.200331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.200495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.200502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 
00:33:53.013 [2024-11-06 10:25:56.200783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.200790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.201072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.201079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.201383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.201391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.201689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.201696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.202063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.202069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.202245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.202252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.202464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.202471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.202812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.202818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.203029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.203036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.203279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.203286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 
00:33:53.013 [2024-11-06 10:25:56.203599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.203608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.013 qpair failed and we were unable to recover it. 00:33:53.013 [2024-11-06 10:25:56.203925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.013 [2024-11-06 10:25:56.203932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.204223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.204231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.204504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.204510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.204802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.204816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.205126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.205133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.205430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.205437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.205761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.205768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.206055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.206062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.206402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.206409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 
00:33:53.014 [2024-11-06 10:25:56.206739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.206746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.207028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.207035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.207330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.207338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.207716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.207722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.208015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.208029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.208369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.208375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.208687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.208694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.209002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.209009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.209312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.209319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.209666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.209673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 
00:33:53.014 [2024-11-06 10:25:56.209941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.209948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.210197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.210203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.210535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.210542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.210849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.210856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.211155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.211162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.211485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.211492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.211817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.211824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.212048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.212056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.212372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.212379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 00:33:53.014 [2024-11-06 10:25:56.212689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.014 [2024-11-06 10:25:56.212695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.014 qpair failed and we were unable to recover it. 
00:33:53.014 [2024-11-06 10:25:56.213002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.014 [2024-11-06 10:25:56.213008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.014 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 2024-11-06 10:25:56.213002 and 2024-11-06 10:25:56.274275 ...]
00:33:53.020 [2024-11-06 10:25:56.274268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.020 [2024-11-06 10:25:56.274275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.020 qpair failed and we were unable to recover it.
00:33:53.020 [2024-11-06 10:25:56.274590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.274601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.274898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.274905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.275237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.275244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.275551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.275557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.275868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.275875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.276199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.276206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.276499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.276506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.276721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.276727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.276906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.276914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.277251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.277258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 
00:33:53.020 [2024-11-06 10:25:56.277538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.277545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.277873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.277880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.278165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.278172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.278492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.278498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.278579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.278586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.278869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.278876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.279029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.279037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.279320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.279327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.279616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.279623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 00:33:53.020 [2024-11-06 10:25:56.279795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.020 [2024-11-06 10:25:56.279803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.020 qpair failed and we were unable to recover it. 
00:33:53.021 [2024-11-06 10:25:56.280117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.280125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.280445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.280452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.280791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.280799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.281107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.281114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.281433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.281441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.281592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.281601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.281870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.281878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.282187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.282195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.282474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.282481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.282799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.282806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 
00:33:53.021 [2024-11-06 10:25:56.283008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.283016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.283321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.283328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.283629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.283636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.283953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.283960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.284161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.284168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.284502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.284509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.284684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.284691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.284970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.284977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.285293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.285300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.285629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.285635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 
00:33:53.021 [2024-11-06 10:25:56.285928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.285936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.286258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.286266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.286459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.286467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.286767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.286775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.287079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.287087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.287409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.287417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.287608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.287614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.287932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.287939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.288340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.288347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.288657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.288664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 
00:33:53.021 [2024-11-06 10:25:56.288982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.288989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.289183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.289190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.289523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.289530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.289844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.289852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.290089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.290096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.290416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.021 [2024-11-06 10:25:56.290423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.021 qpair failed and we were unable to recover it. 00:33:53.021 [2024-11-06 10:25:56.290749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.290756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.291055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.291064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.291377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.291385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.291600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.291608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 
00:33:53.022 [2024-11-06 10:25:56.291903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.291910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.292234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.292240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.292552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.292560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.292863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.292871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.293291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.293298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.293592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.293600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.293798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.293972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.293981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.294320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.294327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.294638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.294646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 
00:33:53.022 [2024-11-06 10:25:56.294980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.295291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.295298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.295605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.295612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.295796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.295803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.296088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.296096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.296384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.296391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.296684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.296691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.297007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.297014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.297312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.297318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.297659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.297666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 
00:33:53.022 [2024-11-06 10:25:56.297954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.297961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.298284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.298291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.298468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.298475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.298894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.298901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.299203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.299211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.299507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.299514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.299709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.299716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.300045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.300052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.300230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.300238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.300414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.300421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 
00:33:53.022 [2024-11-06 10:25:56.300735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.300743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.300920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.300928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.301225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.301233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.301537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.301543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.301838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.301846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.302149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.302156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.302444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.302451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.022 [2024-11-06 10:25:56.302617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.022 [2024-11-06 10:25:56.302626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.022 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.302974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.302982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.303278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.303286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 
00:33:53.023 [2024-11-06 10:25:56.303597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.303604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.303776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.303784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.304070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.304078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.304273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.304280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.304489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.304496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.304825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.304831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.305168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.305176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.305480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.305488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.305794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.305801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.306100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.306107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 
00:33:53.023 [2024-11-06 10:25:56.306400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.306407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.306721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.306728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.307030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.307038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.307235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.307242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.307533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.307540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.307837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.307844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.308121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.308129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.308380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.308388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.308664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.308672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.309044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.309051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 
00:33:53.023 [2024-11-06 10:25:56.309332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.309339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.309657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.309848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.309855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.310227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.310234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.310541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.310548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.310857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.310870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.311163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.311170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.311376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.311384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.311670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.311677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 00:33:53.023 [2024-11-06 10:25:56.311985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.023 [2024-11-06 10:25:56.311992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.023 qpair failed and we were unable to recover it. 
00:33:53.023 [2024-11-06 10:25:56.312293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.023 [2024-11-06 10:25:56.312300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.023 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 repeat continuously from 10:25:56.312 through 10:25:56.371 ...]
00:33:53.028 [2024-11-06 10:25:56.371358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.029 [2024-11-06 10:25:56.371365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.029 qpair failed and we were unable to recover it.
00:33:53.029 [2024-11-06 10:25:56.371671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.371677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.371966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.371974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.372306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.372314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.372605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.372612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.372917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.372924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.373133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.373139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.373450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.373457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.373781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.373790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.374096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.374103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.374348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.374355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 
00:33:53.029 [2024-11-06 10:25:56.374674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.374680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.374831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.374839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.375181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.375189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.375531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.375537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.375839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.375846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.376156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.376163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.376446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.376454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.376761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.376767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.377076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.377083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.377418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.377424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 
00:33:53.029 [2024-11-06 10:25:56.377725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.377732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.377984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.377992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.378323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.378330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.378614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.378621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.378927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.378933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.379269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.379275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.379583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.379589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.379886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.379893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.380176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.380183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.380464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.380471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 
00:33:53.029 [2024-11-06 10:25:56.380766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.380774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.381078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.381086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.381389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.381396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.381691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.381698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.381990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.381997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.382318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.382325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.382634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.382640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.029 [2024-11-06 10:25:56.382949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.029 [2024-11-06 10:25:56.382956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.029 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.383265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.383272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.383557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.383565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 
00:33:53.030 [2024-11-06 10:25:56.383869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.383876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.384187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.384194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.384507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.384513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.384804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.384811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.384993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.385001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.385184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.385190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.385457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.385464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.385767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.385775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.386056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.386064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.386375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.386382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 
00:33:53.030 [2024-11-06 10:25:56.386699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.386706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.387033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.387040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.387330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.387338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.387639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.387646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.387933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.387940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.388269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.388276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.388585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.388591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.388769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.388775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.389068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.389075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.389388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 
00:33:53.030 [2024-11-06 10:25:56.389701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.389708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.390017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.390024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.390210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.390217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.390541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.390549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.390850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.390857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.391203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.391211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.391514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.391521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.391814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.392028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.392035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.392344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.392351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 
00:33:53.030 [2024-11-06 10:25:56.392672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.392679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.392957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.392964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.393285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.393291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.393584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.393591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.393895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.393902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.394201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.394216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.394415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.394421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.030 [2024-11-06 10:25:56.394609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.030 [2024-11-06 10:25:56.394616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.030 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.394932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.394938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.395265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.395272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 
00:33:53.031 [2024-11-06 10:25:56.395593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.395599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.395904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.395912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.396125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.396132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.396327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.396334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.396599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.396606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.396955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.396962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.397288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.397295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.397567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.397577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.397873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.397880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.398198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.398205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 
00:33:53.031 [2024-11-06 10:25:56.398514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.398521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.398673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.398681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.398945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.398952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.399264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.399272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.399514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.399521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.399834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.399841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.400043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.400052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.400268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.400276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.400445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.400451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.400788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.400795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 
00:33:53.031 [2024-11-06 10:25:56.401152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.401159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.401323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.401331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.401398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.401405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.401680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.401686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.401981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.401989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.402190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.402198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.402511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.402518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.402829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.402837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.403157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.403164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.403447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.403454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 
00:33:53.031 [2024-11-06 10:25:56.403762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.403769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.404120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.404127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.404307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.404314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.404581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.404589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.404889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.404897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.405057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.405065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.405340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.031 [2024-11-06 10:25:56.405346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.031 qpair failed and we were unable to recover it. 00:33:53.031 [2024-11-06 10:25:56.405592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.405599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.405771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.405779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.405958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.405967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 
00:33:53.032 [2024-11-06 10:25:56.406294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.406301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.406572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.406580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.406891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.406898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.407217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.407583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.407590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.407893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.407900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.408194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.408201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.408479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.408488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.408812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.408819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 00:33:53.032 [2024-11-06 10:25:56.409131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.032 [2024-11-06 10:25:56.409138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.032 qpair failed and we were unable to recover it. 
00:33:53.032 [2024-11-06 10:25:56.409419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.032 [2024-11-06 10:25:56.409427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.032 qpair failed and we were unable to recover it.
00:33:53.032 [... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for 210 consecutive occurrences in this capture, with only the timestamps advancing from 10:25:56.409419 to 10:25:56.470840 ...]
00:33:53.037 [2024-11-06 10:25:56.470834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.037 [2024-11-06 10:25:56.470840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.037 qpair failed and we were unable to recover it.
00:33:53.037 [2024-11-06 10:25:56.471056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.471063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.471352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.471359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.471653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.471660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.471984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.471991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.472292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.472299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.472621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.472628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.472917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.472924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.473237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.473243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.473525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.473531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.473835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.473842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 
00:33:53.037 [2024-11-06 10:25:56.474177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.474185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.474382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.474665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.474673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.474976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.474984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.475276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.475285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.475596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.475602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.475891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.475898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.476208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.476215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.476544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.037 [2024-11-06 10:25:56.476552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.037 qpair failed and we were unable to recover it. 00:33:53.037 [2024-11-06 10:25:56.476857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.476867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 
00:33:53.038 [2024-11-06 10:25:56.477156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.477163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.477469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.477476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.477777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.477784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.478087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.478094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.478403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.478410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.478720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.478727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.479019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.479026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.479316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.479323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.479635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.479642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.479957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.479964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 
00:33:53.038 [2024-11-06 10:25:56.480254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.480260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.480473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.480480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.480648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.480656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.480962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.480969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.481261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.481269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.481530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.481537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.481697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.481704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.481989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.481997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.482331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.482337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.482661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.482667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 
00:33:53.038 [2024-11-06 10:25:56.483031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.483038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.483352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.483359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.483551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.483557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.483932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.483939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.484253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.484260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.484564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.484572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.484867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.484875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.485182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.485188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.485497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.485504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.485815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.485822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 
00:33:53.038 [2024-11-06 10:25:56.486190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.486197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.486520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.486527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.486840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.486847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.487142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.487150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.487462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.487471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.487786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.038 [2024-11-06 10:25:56.487793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.038 qpair failed and we were unable to recover it. 00:33:53.038 [2024-11-06 10:25:56.488098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.488105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.488413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.488420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.488710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.488716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.489026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.489033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 
00:33:53.039 [2024-11-06 10:25:56.489353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.489359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.489645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.489652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.489930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.489937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.490249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.490256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.490463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.490470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.490739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.490746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.491082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.491089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.491372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.491379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.491688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.491695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.491986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.491993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 
00:33:53.039 [2024-11-06 10:25:56.492299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.492306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.492527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.492534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.492811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.492819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.493012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.493358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.493365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.039 [2024-11-06 10:25:56.493670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.039 [2024-11-06 10:25:56.493678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.039 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.493985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.493993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.494289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.494298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.494591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.494597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.494754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.494761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 
00:33:53.320 [2024-11-06 10:25:56.495081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.495088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.495361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.495368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.495666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.495673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.495999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.496006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.496325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.496332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.496641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.496648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.496970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.496977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.497287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.497294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.497596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.497603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.497934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.497942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 
00:33:53.320 [2024-11-06 10:25:56.498275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.498281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.498603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.498611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.498912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.320 [2024-11-06 10:25:56.498919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-11-06 10:25:56.499231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.499238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.499526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.499535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.499922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.499929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.500252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.500259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.500568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.500575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.500728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.500736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.501019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.501027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 
00:33:53.321 [2024-11-06 10:25:56.501197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.501204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.501518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.501525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.501811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.501819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.502118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.502125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.502412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.502420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.502729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.502737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.503040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.503047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.503318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.503325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.503713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.503719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.504002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.504009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 
00:33:53.321 [2024-11-06 10:25:56.504223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.504230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.504496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.504503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.504828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.504835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.505145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.505152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.505444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.505451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.505740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.505748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.505947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.505954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.506327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.506333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.506677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.506683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.506883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.506890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 
00:33:53.321 [2024-11-06 10:25:56.507196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.507202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.507538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.507545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.507869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.507875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.508183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.508189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.508485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.508491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.508802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.508808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.321 [2024-11-06 10:25:56.509113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.321 [2024-11-06 10:25:56.509120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.321 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.509435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.509442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.509590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.509598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.509899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.509906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 
00:33:53.322 [2024-11-06 10:25:56.510281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.510289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.510614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.510622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.510933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.510941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.511137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.511145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.511338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.511347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.511640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.511647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.511864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.511873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.512155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.512163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.512360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.512368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.512579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.512586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 
00:33:53.322 [2024-11-06 10:25:56.512807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.512816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.513112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.513120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.513435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.513442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.513708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.513716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.514047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.514372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.514380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.514690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.514697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.514901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.514909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.515188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.515196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.515257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.515265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 
00:33:53.322 [2024-11-06 10:25:56.515613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.515621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.515904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.515913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.516082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.516090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.516290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.516298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.516592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.516600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.516949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.516956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.517310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.517317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.517515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.517522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.517707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.517714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.518017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 
00:33:53.322 [2024-11-06 10:25:56.518374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.518382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.518678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.518686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.518894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.518902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.519170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.519178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.519494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.322 [2024-11-06 10:25:56.519502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.322 qpair failed and we were unable to recover it. 00:33:53.322 [2024-11-06 10:25:56.519811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.519818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.520289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.520297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.520504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.520512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.520686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.520694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.520984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.520992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 
00:33:53.323 [2024-11-06 10:25:56.521055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.521062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.521357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.521364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.521757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.521765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.521970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.521978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.522284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.522293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.522469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.522476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.522775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.522783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.523082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.523090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.523404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.523412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.523617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.523624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 
00:33:53.323 [2024-11-06 10:25:56.523926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.523935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.524145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.524152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.524433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.524441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.524776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.524784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.524984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.524992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.525259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.525267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.525576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.525584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.525754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.525762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.525949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.525957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.526267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.526275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 
00:33:53.323 [2024-11-06 10:25:56.526539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.526547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.526868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.526876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.527051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.527059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.527266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.323 [2024-11-06 10:25:56.527273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.323 qpair failed and we were unable to recover it. 00:33:53.323 [2024-11-06 10:25:56.527572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.527580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.527877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.527885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.528056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.528064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.528351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.528358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.528638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.528645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.528837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.528845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 
00:33:53.324 [2024-11-06 10:25:56.529020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.529028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.529302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.529310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.529461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.529469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.529662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.529670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.529955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.529963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.530303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.530311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.530619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.530627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.530923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.530931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.531109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.531117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.531420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.531429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 
00:33:53.324 [2024-11-06 10:25:56.531746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.531754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.532141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.532149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.532194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.532201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.532471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.532479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.532680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.532690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.532981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.532989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.533162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.533170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.533535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.533543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.533874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.533883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.534147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.534155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 
00:33:53.324 [2024-11-06 10:25:56.534347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.534355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.534524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.534532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.534834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.534842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.535222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.535229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.535514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.535522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.535835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.535842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.536230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.536238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.536550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.536557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.536732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.536739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.537083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.537091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 
00:33:53.324 [2024-11-06 10:25:56.537295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.537303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.324 [2024-11-06 10:25:56.537607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.324 [2024-11-06 10:25:56.537614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.324 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.537899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.537906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.538206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.538213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.538628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.538636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.538828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.538836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.539139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.539146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.539463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.539470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.539661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.539669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.540045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.540053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 
00:33:53.325 [2024-11-06 10:25:56.540262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.540269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.540576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.540583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.540895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.540902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.541269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.541276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.541474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.541481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.541689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.541695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.541740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.541747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.542047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.542054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.542365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.542373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.542681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.542688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 
00:33:53.325 [2024-11-06 10:25:56.542994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.543001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.543174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.543182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.543473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.543480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.543756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.543762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.543926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.543935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.544100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.544107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.544427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.325 [2024-11-06 10:25:56.544434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.325 qpair failed and we were unable to recover it. 00:33:53.325 [2024-11-06 10:25:56.544583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.544589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.544813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.544820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.545127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.545135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 
00:33:53.326 [2024-11-06 10:25:56.545317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.545325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.545486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.545494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.545813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.545821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.546109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.546116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.546412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.546420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.546727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.546734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.547047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.547054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.547165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.547172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.547448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.547455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.547751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.547759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 
00:33:53.326 [2024-11-06 10:25:56.548058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.548065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.548364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.548378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.548693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.548700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.549004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.549011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.549222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.549230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.549425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.549432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.549744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.549751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.550088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.550095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.550394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.550401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.550714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.550721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 
00:33:53.326 [2024-11-06 10:25:56.550882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.550889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.551102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.551110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.551419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.551426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.551478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.551486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.551786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.551793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.551980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.551988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.552284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.552292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.552602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.552609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.552915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.552922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.553090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.553098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 
00:33:53.326 [2024-11-06 10:25:56.553406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.553413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.553743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.553750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.553993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.326 [2024-11-06 10:25:56.554000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.326 qpair failed and we were unable to recover it. 00:33:53.326 [2024-11-06 10:25:56.554290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.554297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.554650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.554658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.554949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.554956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.555293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.555466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.555473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.555664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.555671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 00:33:53.327 [2024-11-06 10:25:56.555937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.327 [2024-11-06 10:25:56.555944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.327 qpair failed and we were unable to recover it. 
00:33:53.327 [2024-11-06 10:25:56.556178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.327 [2024-11-06 10:25:56.556185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.327 qpair failed and we were unable to recover it.
(The same error pair from posix_sock_create and nvme_tcp_qpair_connect_sock, always for tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 and errno = 111, repeats continuously between 10:25:56.556 and 10:25:56.616; every attempt ends with "qpair failed and we were unable to recover it.")
00:33:53.333 [2024-11-06 10:25:56.616956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.333 [2024-11-06 10:25:56.616963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.333 qpair failed and we were unable to recover it.
00:33:53.333 [2024-11-06 10:25:56.617175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.617182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.617467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.617475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.617568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.617575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 
00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Write completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 Read completed with error (sct=0, sc=8) 00:33:53.333 starting I/O failed 00:33:53.333 [2024-11-06 10:25:56.618305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.333 [2024-11-06 10:25:56.618738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.618794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb59c000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.619213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.619314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb59c000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.619663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.619672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.619993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.620000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.620302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.620309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.620618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.620624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.620785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.620793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.621022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.621029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 
00:33:53.333 [2024-11-06 10:25:56.621348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.621355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.621654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.621662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.622009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.622016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.622298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.622305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.622618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.622624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.622839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.622846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.623063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.623369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.623376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.623689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.623696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.623996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.624003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 
00:33:53.333 [2024-11-06 10:25:56.624300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.624307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.624621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.624629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.333 qpair failed and we were unable to recover it. 00:33:53.333 [2024-11-06 10:25:56.624820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.333 [2024-11-06 10:25:56.624827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.625115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.625122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.625442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.625449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.625750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.625757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.626085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.626093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.626388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.626394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.626777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.626784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.627067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.627074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 
00:33:53.334 [2024-11-06 10:25:56.627395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.627402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.627688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.627695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.628020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.628027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.628345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.628352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.628650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.628658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.628925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.628932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.629253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.629260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.629450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.629464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.629787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.629794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.630086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.630094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 
00:33:53.334 [2024-11-06 10:25:56.630411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.630418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.630720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.630727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.630995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.631002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.631337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.631346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.631656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.631662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.632036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.632044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.632338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.632345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.632636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.632643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.632806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.632814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.632995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.633003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 
00:33:53.334 [2024-11-06 10:25:56.633316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.633323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.633608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.633616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.633967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.633975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.634251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.634258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.634565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.634572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.634872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.634879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.635204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.635211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.635521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.635528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.635823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.635829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 00:33:53.334 [2024-11-06 10:25:56.635993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.636000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.334 qpair failed and we were unable to recover it. 
00:33:53.334 [2024-11-06 10:25:56.636222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.334 [2024-11-06 10:25:56.636229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.636567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.636574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.636884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.636891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.637171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.637177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.637488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.637495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.637701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.637709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.638011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.638018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.638312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.638319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.638430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.638436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.638759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.638766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 
00:33:53.335 [2024-11-06 10:25:56.639095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.639102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.639411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.639418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.639727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.639734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.640120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.640127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.640384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.640391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.640709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.640715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.641017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.641024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.641330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.641337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.641490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.641498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.641702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.641709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 
00:33:53.335 [2024-11-06 10:25:56.642066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.642073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.642371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.642379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.642695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.642703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.643012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.643020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.643307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.643314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.643596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.643603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.643881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.643888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.644044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.644052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.644369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.644375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.644664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.644671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 
00:33:53.335 [2024-11-06 10:25:56.644986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.644993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.645301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.645308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.645635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.645642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.335 qpair failed and we were unable to recover it. 00:33:53.335 [2024-11-06 10:25:56.645884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-11-06 10:25:56.645891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.646228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.646235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.646531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.646537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.646888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.646895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.647163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.647170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.647480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.647487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.647801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.647807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 
00:33:53.336 [2024-11-06 10:25:56.648125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.648132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.648440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.648446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.648759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.648765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.649082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.649089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.649379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.649386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.649571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.649579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.649901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.649908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.650213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.650220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.650503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.650511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.650797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.650804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 
00:33:53.336 [2024-11-06 10:25:56.651100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.651108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.651416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.651422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.651779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.651787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.652102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.652109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.652417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.652424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.652738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.652744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.653054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.653061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.653220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.653228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.653489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.653496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 00:33:53.336 [2024-11-06 10:25:56.653798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-11-06 10:25:56.653805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.336 qpair failed and we were unable to recover it. 
00:33:53.336 [2024-11-06 10:25:56.654097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.336 [2024-11-06 10:25:56.654104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.336 qpair failed and we were unable to recover it.
00:33:53.336 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:25:56.654 through 10:25:56.715 ...]
00:33:53.342 [2024-11-06 10:25:56.715841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.342 [2024-11-06 10:25:56.715848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.342 qpair failed and we were unable to recover it.
00:33:53.342 [2024-11-06 10:25:56.716068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.716076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.716380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.716387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.716696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.716703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.716892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.716899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.717191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.717198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.717405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.717412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.717708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.717715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.718028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.718035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.718369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.718375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.718565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.718573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 
00:33:53.342 [2024-11-06 10:25:56.718800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.718807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.719098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.719105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.719482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.719489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.719829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.719836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.720099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.720106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.720417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.720425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.720727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.720734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.721153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.721160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.721456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.721463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.721791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.721798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 
00:33:53.342 [2024-11-06 10:25:56.722089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.722099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.722388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.722395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.722709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.722716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.722895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.722903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.723200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.723207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.723410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.342 [2024-11-06 10:25:56.723417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.342 qpair failed and we were unable to recover it. 00:33:53.342 [2024-11-06 10:25:56.723728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.723735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.724043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.724050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.724371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.724377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.724686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.724693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 
00:33:53.343 [2024-11-06 10:25:56.725003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.725010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.725326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.725333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.725631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.725823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.725830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.726133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.726140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.726431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.726438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.726758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.726765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.727081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.727088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.727314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.727321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.727656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 
00:33:53.343 [2024-11-06 10:25:56.727832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.727839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.728191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.728198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.728513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.728520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.728732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.728738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.728928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.728936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.729164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.729172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.729437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.729444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.729728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.729736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.730024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.730031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.730299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.730306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 
00:33:53.343 [2024-11-06 10:25:56.730669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.730677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.730984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.730991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.731316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.731323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.731637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.731645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.731947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.731955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.732247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.732254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.732591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.732897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.732905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.733222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.733229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.733502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.733509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 
00:33:53.343 [2024-11-06 10:25:56.733813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.733823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.734112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.734120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.734435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.734442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.734749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.734757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.343 qpair failed and we were unable to recover it. 00:33:53.343 [2024-11-06 10:25:56.735109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.343 [2024-11-06 10:25:56.735117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.735429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.735437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.735747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.735755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.736040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.736048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.736359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.736367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.736710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.736717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 
00:33:53.344 [2024-11-06 10:25:56.737010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.737024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.737332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.737339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.737742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.737749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.738084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.738092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.738298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.738304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.738569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.738576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.738830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.738837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.739145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.739152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.739359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.739367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.739695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.739703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 
00:33:53.344 [2024-11-06 10:25:56.740035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.740042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.740354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.740361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.740669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.740676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.740986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.740993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.741320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.741327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.741534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.741541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.741856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.741866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.742238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.742245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.742536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.742543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.742851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.742859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 
00:33:53.344 [2024-11-06 10:25:56.743160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.743167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.743440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.743447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.743775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.743783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.744110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.744117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.744426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.744433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.744738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.744746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.745148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.745156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.745417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.745424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.344 [2024-11-06 10:25:56.745747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.344 [2024-11-06 10:25:56.745754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.344 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.745997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.746005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 
00:33:53.345 [2024-11-06 10:25:56.746309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.746319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.746625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.746632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.746934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.746942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.747247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.747255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.747588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.747596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.747748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.747757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.748035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.748042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.748345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.748637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.748644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.748956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.748963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 
00:33:53.345 [2024-11-06 10:25:56.749286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.749293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.749447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.749454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.749872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.749879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.750169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.750176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.750494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.750501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.750812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.750819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.751034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.751041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.751360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.751367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.751468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.751475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.751736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.751744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 
00:33:53.345 [2024-11-06 10:25:56.752037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.752045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.752343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.752351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.752698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.752705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.752983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.752990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.753356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.753363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.753655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.753662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.753983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.753990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.754303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.754311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.754616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.754623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.754811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.754818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 
00:33:53.345 [2024-11-06 10:25:56.755149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.755156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.755540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.755547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.755873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.755880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.756212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.756219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.756515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.756523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.756842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.756850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.757161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.345 [2024-11-06 10:25:56.757168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.345 qpair failed and we were unable to recover it. 00:33:53.345 [2024-11-06 10:25:56.757453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.757460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.757762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.757769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.758083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.758091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 
00:33:53.346 [2024-11-06 10:25:56.758389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.758398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.758699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.758708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.759018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.759025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.759324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.759332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.759635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.759642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.759927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.759934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.760146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.760153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.760469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.760476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.760796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.760803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.761115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.761122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 
00:33:53.346 [2024-11-06 10:25:56.761431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.761438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.761636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.761643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.761844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.761851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.762171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.762178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.762341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.762349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.762674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.762681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.762985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.762994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.763330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.763338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.763691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.763699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.764002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.764010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 
00:33:53.346 [2024-11-06 10:25:56.764238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.764246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.764559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.764566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.764875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.764883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.765204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.765211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.765496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.765503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.765818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.765825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.766149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.766156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.766365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.766373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.766709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.766716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.767013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.767027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 
00:33:53.346 [2024-11-06 10:25:56.767233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.767239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.767552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.767559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.767872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.767880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.768161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.768169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.768495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.768502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.346 qpair failed and we were unable to recover it. 00:33:53.346 [2024-11-06 10:25:56.768539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.346 [2024-11-06 10:25:56.768546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.768902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.768909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.769253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.769261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.769541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.769548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.769865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.769872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 
00:33:53.347 [2024-11-06 10:25:56.770090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.770100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.770395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.770409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.770726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.770734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.770916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.770924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.771201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.771209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.771512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.771519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.771807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.771815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.772133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.772140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.772321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.772329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.772589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.772597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 
00:33:53.347 [2024-11-06 10:25:56.772982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.772989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.773286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.773293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.773605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.773613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.773779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.773787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.774069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.774077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.774363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.774370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.774694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.774701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.774903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.774910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.775220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.775228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.775571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.775578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 
00:33:53.347 [2024-11-06 10:25:56.775929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.775937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.776238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.776246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.776556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.776564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.776880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.776889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.777155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.777162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.777474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.777481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.777911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.777918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.778213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.778221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.778537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.778543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.778748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.778756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 
00:33:53.347 [2024-11-06 10:25:56.779009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.779017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.779215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.779222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.779524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.779531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.347 [2024-11-06 10:25:56.779827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.347 [2024-11-06 10:25:56.779833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.347 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.780133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.780140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.780452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.780460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.780615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.780623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.780803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.780809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.781120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.781127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.781417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.781424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 
00:33:53.348 [2024-11-06 10:25:56.781727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.781736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.782038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.782046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.782357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.782363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.782692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.782700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.783012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.783020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.783308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.783316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.783631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.783639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.783975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.783983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.784233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.784240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.784561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.784568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 
00:33:53.348 [2024-11-06 10:25:56.784774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.784781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.785066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.785073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.785376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.785384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.785555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.785563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.785877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.785885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.786043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.786050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.786342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.786702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.786710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.787056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.787063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.787395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.787403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 
00:33:53.348 [2024-11-06 10:25:56.787591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.787598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.787802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.787808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.788110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.788117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.788402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.788408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.788725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.788733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.788945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.788953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.789312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.789319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.348 [2024-11-06 10:25:56.789649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.348 [2024-11-06 10:25:56.789656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.348 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.789964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.789971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.790303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.790310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 
00:33:53.349 [2024-11-06 10:25:56.790610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.790927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.790934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.791276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.791283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.791496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.791774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.791782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.792077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.792084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.792381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.792388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.792699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.792706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.793124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.793131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.793432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.793439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 
00:33:53.349 [2024-11-06 10:25:56.793557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.793566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.793768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.793778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.793959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.793966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.794303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.794310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.794631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.794640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.794940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.794948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.795299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.795629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.795636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.795943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.795951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.796288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.796295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 
00:33:53.349 [2024-11-06 10:25:56.796618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.796626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.796837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.796843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.797054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.797061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.797330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.797338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.797636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.797643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.798019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.798026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.798409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.798424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.798737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.798744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.799029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.799037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.799338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.799345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 
00:33:53.349 [2024-11-06 10:25:56.799657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.799664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.799960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.799968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.800271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.800279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.800579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.800586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.800904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.800911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.349 [2024-11-06 10:25:56.801235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.349 [2024-11-06 10:25:56.801242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.349 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.801462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.801470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.801773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.801781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.802100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.802108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.802408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.802417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 
00:33:53.629 [2024-11-06 10:25:56.802585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.802594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.802898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.802906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.803239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.803245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.803461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.803468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.803751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.629 [2024-11-06 10:25:56.803758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.629 qpair failed and we were unable to recover it. 00:33:53.629 [2024-11-06 10:25:56.803938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.630 [2024-11-06 10:25:56.803946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.630 qpair failed and we were unable to recover it. 00:33:53.630 [2024-11-06 10:25:56.804243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.630 [2024-11-06 10:25:56.804250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.630 qpair failed and we were unable to recover it. 00:33:53.630 [2024-11-06 10:25:56.804440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.630 [2024-11-06 10:25:56.804448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.630 qpair failed and we were unable to recover it. 00:33:53.630 [2024-11-06 10:25:56.804774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.630 [2024-11-06 10:25:56.804781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.630 qpair failed and we were unable to recover it. 00:33:53.630 [2024-11-06 10:25:56.805160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.630 [2024-11-06 10:25:56.805168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.630 qpair failed and we were unable to recover it. 
00:33:53.630 [2024-11-06 10:25:56.805476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.630 [2024-11-06 10:25:56.805485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.630 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 10:25:56.805 and 10:25:56.868 ...]
00:33:53.633 [2024-11-06 10:25:56.868463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.633 [2024-11-06 10:25:56.868472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.633 qpair failed and we were unable to recover it.
00:33:53.633 [2024-11-06 10:25:56.868761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.868769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.869058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.869066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.869387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.869396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.869745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.869754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.870075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.870083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.870297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.870305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.870564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.870572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.870842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.870850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.871140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.871150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.871456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.871465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 
00:33:53.633 [2024-11-06 10:25:56.871772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.871781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.872082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.872091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.872413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.872422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.872615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.872624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.872891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.872900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.873198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.873206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.873534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.873543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.873850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.873860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.874156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.874165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.874347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.874356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 
00:33:53.633 [2024-11-06 10:25:56.874649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.874656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.874977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.874986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.875308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.875317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.875652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.875660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.875976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.875985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.876320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.876329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.876654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.876663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.876940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.876949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.877287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.877296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.877574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.877582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 
00:33:53.633 [2024-11-06 10:25:56.877889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.877898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.878210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.878218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.878542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.878551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.878864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.878872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.879097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.879105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.879412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.879420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.879605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.879613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.879858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.879870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.879910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.879918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.880210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.880218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 
00:33:53.633 [2024-11-06 10:25:56.880546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.880556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.880882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.880892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.881095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.881104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.881429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.881438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.881773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.881782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.882095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.882104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.882283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.882291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.882583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.882591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.882907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.882916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 00:33:53.633 [2024-11-06 10:25:56.883263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.633 [2024-11-06 10:25:56.883271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.633 qpair failed and we were unable to recover it. 
00:33:53.633 [2024-11-06 10:25:56.883554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.883562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.883886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.883895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.884081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.884089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.884400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.884409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.884712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.884721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.884914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.884923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.885206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.885214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.885518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.885533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.885734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.885743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.886056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.886064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.886373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.886382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.886707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.886716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.887031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.887402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.887410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.887741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.887750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.888053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.888062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.888401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.888410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.888680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.888688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.889004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.889013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.889320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.889329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.889600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.889608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.889882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.889892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.890193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.890201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.890525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.890534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.890681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.890690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.890859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.890872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.891225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.891235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.891409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.891418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.891689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.891698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.892029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.892039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.892262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.892271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.892597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.892606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.892934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.892942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.893266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.893276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.893581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.893589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.893786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.893794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.894009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.894018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.894317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.894325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.894647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.894656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.894971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.894980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.895277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.895286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.895590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.895599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.895911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.896262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.896270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.896455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.896463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.896735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.896744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.896917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.896926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.897181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.897191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.897471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.897480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.897647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.897865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.897873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.898161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.898169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.898380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.898387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.898687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.898695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.898982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.898990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.899332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.899340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.899662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.899671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.899974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.899983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.900308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.900316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.900621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.900629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 
00:33:53.634 [2024-11-06 10:25:56.900918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.900927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.901244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.901252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.901556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.901564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.901872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.901880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.902266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.634 [2024-11-06 10:25:56.902274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.634 qpair failed and we were unable to recover it. 00:33:53.634 [2024-11-06 10:25:56.902451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.902459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.902756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.902765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.903089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.903098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.903386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.903394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.903679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.903688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 
00:33:53.635 [2024-11-06 10:25:56.903994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.904002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.904311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.904320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.904604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.904613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.904939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.904948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.905218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.905226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.905531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.905540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.905676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.905685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.905959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.905968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.906167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.906176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.906482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.906490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 
00:33:53.635 [2024-11-06 10:25:56.906793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.906803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.907102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.907111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.907416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.907424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.907608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.907616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.907932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.907940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.908125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.908133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.908314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.908322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.908628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.908638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.908950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.908959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 00:33:53.635 [2024-11-06 10:25:56.909228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.635 [2024-11-06 10:25:56.909236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.635 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.970268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.970277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.970592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.970898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.970907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.971330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.971339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.971511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.971519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.971850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.971859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.972163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.972172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.972507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.972516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.972875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.972884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.973083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.973090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.973260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.973269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.973602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.973612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.973915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.973924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.974145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.974153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.974458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.974466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.974757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.974765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.975056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.975065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.975387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.975395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.975696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.975705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.976024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.976032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.976333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.976341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.976633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.976641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.976928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.976937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.977258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.977266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.977576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.977585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.977883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.977892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.978208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.978217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.978506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.978515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.978716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.978724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.979026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.979035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.979356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.979365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.979730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.979739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.980038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.980047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.980333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.980343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.980638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.980646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.980964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.980972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.981276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.981284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.981589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.981600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.981905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.981914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.982120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.982129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.982437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.982445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.982760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.982768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.983098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.983107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.983403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.983412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.983709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.983717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.984027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.984036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.984434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.984442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.984602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.984610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.984926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.984935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.985298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.985306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 
00:33:53.638 [2024-11-06 10:25:56.985468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.985477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.985812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.638 [2024-11-06 10:25:56.985821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.638 qpair failed and we were unable to recover it. 00:33:53.638 [2024-11-06 10:25:56.986108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.986117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.986410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.986419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.986614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.986624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.986929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.986938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.987291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.987299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.987608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.987616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.987921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.987930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.988234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.988242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:56.988552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.988562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.988827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.988836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.989135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.989143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.989321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.989330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.989644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.989655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.989852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.989860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.990197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.990206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.990534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.990542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.990848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.990857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.991142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.991150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:56.991454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.991462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.991641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.991650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.991852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.991864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.992135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.992143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.992408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.992416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.992738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.992746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.993038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.993046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.993205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.993477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.993777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.993785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:56.994048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.994056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.994380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.994388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.994692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.994701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.994991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.995000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.995313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.995321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.995471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.995481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.995753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.995761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.996062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.996071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.996413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.996422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.996730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:56.997084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.997093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.997421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.997431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.997632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.997641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.997958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.997967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.998279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.998288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.998596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.998604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.998907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.998916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.999234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.999243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.999581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.999590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:56.999904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:56.999913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:57.000212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.000220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.000526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.000535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.000849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.000857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.001149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.001158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.001478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.001489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.001798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.001807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.002122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.002131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.002417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.002425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.002776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.002785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.003092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.003100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 
00:33:53.639 [2024-11-06 10:25:57.003423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.003432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.003731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.003740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.004041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.004050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.004410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.004418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.004766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.004775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.005007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.005015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.639 qpair failed and we were unable to recover it. 00:33:53.639 [2024-11-06 10:25:57.005271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.639 [2024-11-06 10:25:57.005279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.005585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.005594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.005898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.005909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.006095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.006103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 
00:33:53.640 [2024-11-06 10:25:57.006381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.006389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.006656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.006666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.006970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.006978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.007286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.007296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.007610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.007619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.007927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.007936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.008216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.008225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.008533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.008542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.008841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.008849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 00:33:53.640 [2024-11-06 10:25:57.009126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.640 [2024-11-06 10:25:57.009134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.640 qpair failed and we were unable to recover it. 
00:33:53.640 [2024-11-06 10:25:57.009445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.640 [2024-11-06 10:25:57.009453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.640 qpair failed and we were unable to recover it.
00:33:53.642 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, with timestamps running from 10:25:57.009 through 10:25:57.070 ...]
00:33:53.643 [2024-11-06 10:25:57.070520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.070528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.070701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.070709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.071006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.071015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.071341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.071349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.071661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.071671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.071868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.071878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.072219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.072227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.072454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.072462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.072773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.072782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.072963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.072973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 
00:33:53.643 [2024-11-06 10:25:57.073260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.073269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.073589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.073598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.073917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.073930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.074293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.074301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.074627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.074635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.074930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.074939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.075322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.075332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.075538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.075545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.075845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.075853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.076137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.076146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 
00:33:53.643 [2024-11-06 10:25:57.076547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.076556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.076857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.076869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.077160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.077168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.077481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.077804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.077813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.078123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.078132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.078466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.078475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.078788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.078797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.078881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.078890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.079190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.079199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 
00:33:53.643 [2024-11-06 10:25:57.079542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.079551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.079866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.079876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.080082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.080091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.080433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.080442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.080753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.080762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.081072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.081082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.081406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.081415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.081567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.081577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.081869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.081881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.082151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.082159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 
00:33:53.643 [2024-11-06 10:25:57.082423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.082432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.082744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.082753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.083169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.083178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.083494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.083502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.083799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.083809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.084125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.084133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.084450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.084459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.084795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.084803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.085101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.085109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.085427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.085436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 
00:33:53.643 [2024-11-06 10:25:57.085758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.085767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.085939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.085948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.643 [2024-11-06 10:25:57.086255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.643 [2024-11-06 10:25:57.086264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.643 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.086487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.086495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.086744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.086751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.087143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.087477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.087486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.087806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.087814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.088050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.088059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.088381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.088389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.088752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.088759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.088919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.088927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.089175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.089184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.089510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.089518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.089834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.089843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.090043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.090051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.090242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.090250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.090539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.090547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.090868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.090877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.091099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.091107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.091333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.091647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.091942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.091950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.092255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.092263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.092607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.092616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.092919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.092929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.093265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.093274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.093439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.093447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.093757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.093765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.093942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.093950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.094106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.094114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.094446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.094454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.094759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.094769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.095090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.095098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.095394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.095404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.095754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.095762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.095914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.096241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.096249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.096582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.096591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.096904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.096913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.097212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.097220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.097389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.097399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.097590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.097598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.097860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.097871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.098138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.098146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.098436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.098445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.098772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.098780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.098966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.098974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.099293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.099301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.099493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.099502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.099828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.099837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.100135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.100145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.100456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.100464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.100749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.100758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.101066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.101075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.101403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.101413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.101729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.101737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.102033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.102041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.102339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.102348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.102662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.102671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.102869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.102877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.103174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.103182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.103501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.103510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.103807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.104105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.104114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.104309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.104317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.104575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.104584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.104902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.104911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.105119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.105127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 00:33:53.644 [2024-11-06 10:25:57.105297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.644 [2024-11-06 10:25:57.105305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.644 qpair failed and we were unable to recover it. 
00:33:53.644 [2024-11-06 10:25:57.105605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.105614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.105875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.105883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.106157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.106167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.106377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.106385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.106720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.106731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.107039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.107047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.107376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.107385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.107706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.107714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.107977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.107986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.108315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.108323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 
00:33:53.645 [2024-11-06 10:25:57.108685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.108693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.108852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.108860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.109173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.109182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.109434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.109442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.109762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.109772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.110052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.110061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.110432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.110441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.110741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.110750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.111040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.111048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.111415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.111424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 
00:33:53.645 [2024-11-06 10:25:57.111710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.111717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.112042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.112051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.645 [2024-11-06 10:25:57.112369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.645 [2024-11-06 10:25:57.112378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.645 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.112686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.112696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.113011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.113020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.113340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.113349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.113650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.113659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.113967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.113976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.114328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.114337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.114639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.114647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 
00:33:53.920 [2024-11-06 10:25:57.115263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.115280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.115586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.115596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.116220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.116236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.116543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.116553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.116858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.116871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.920 [2024-11-06 10:25:57.117159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.920 [2024-11-06 10:25:57.117167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.920 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.117462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.117471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.117673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.117681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.117944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.117953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.118287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.118296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.118618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.118626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.118964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.118973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.119321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.119331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.119633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.119642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.119930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.119942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.120099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.120108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.120383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.120392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.120711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.120720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.121058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.121067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.121375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.121386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.121681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.121690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.121997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.122005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.122285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.122294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.122613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.122622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.122927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.122936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.123258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.123265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.123451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.123459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.123642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.123649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.123932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.123941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.124283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.124291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.124502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.124510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.124798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.124806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.125021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.125030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.125305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.125313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.125592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.125601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.125777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.125785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.126083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.126092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.126400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.126408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.126720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.126729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.126892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.126902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.127203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.127211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.127416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.127424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.127743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.127751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.128035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.128043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.128346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.128354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.128662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.128671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.129001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.129010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.129274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.129282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.129592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.129600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.129906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.129915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.130100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.130108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.130416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.130425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.130593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.130601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.130879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.130887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.131198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.131207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.131514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.131523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.131839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.131847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.132056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.132064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.132365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.132373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.132674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.132683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.132992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.133001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.133318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.133327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.133515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.133523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.133925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.133933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.134241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.134249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.134564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.134573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.134875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.134883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.135228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.135236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.135523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.135531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.135868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.135876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 
00:33:53.921 [2024-11-06 10:25:57.136154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.136162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.136468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.136477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.136648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.136656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.136837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.136846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.921 [2024-11-06 10:25:57.137152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.921 [2024-11-06 10:25:57.137160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.921 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.137462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.137471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.137801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.137809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.138117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.138126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.138418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.138426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.138732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.138740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.138892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.138902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.139209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.139219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.139520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.139529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.139711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.139720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.140107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.140116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.140425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.140432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.140763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.140772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.141075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.141084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.141462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.141471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.141769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.141778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.142083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.142093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.142286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.142295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.142610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.142619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.142922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.142931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.143262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.143271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.143450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.143458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.143747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.143756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.144034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.144043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.144407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.144416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.144717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.144726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.145032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.145041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.145344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.145354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.145690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.145699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.146007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.146016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.146303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.146311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.146578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.146586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.146894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.146903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.147233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.147242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.147545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.147554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.147854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.147866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.148176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.148186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.148538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.148546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.148852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.148864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.149142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.149151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.149450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.149459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.149767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.149775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.150073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.150081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.150384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.150392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.150719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.150728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.151037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.151046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.151230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.151238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.151535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.151545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.151882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.151891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.152199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.152207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.152515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.152524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.152839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.153173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.153182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.153489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.153498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.153807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.153816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.154124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.154133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.154464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.154473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.154780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.154789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.155098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.155108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.155414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.155422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.155607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.155615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.155927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.155935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.155975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.155982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.156249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.156257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.156584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.156592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.156913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.156922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 
00:33:53.922 [2024-11-06 10:25:57.157181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.157189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.157503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.157512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.157813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.922 [2024-11-06 10:25:57.157821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.922 qpair failed and we were unable to recover it. 00:33:53.922 [2024-11-06 10:25:57.158014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.158023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.158346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.158354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.158657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.158665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.158969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.158978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.159297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.159306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.159599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.159608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 00:33:53.923 [2024-11-06 10:25:57.159915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.923 [2024-11-06 10:25:57.159923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.923 qpair failed and we were unable to recover it. 
00:33:53.923 [2024-11-06 10:25:57.160245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.923 [2024-11-06 10:25:57.160253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.923 qpair failed and we were unable to recover it.
00:33:53.926 [2024-11-06 10:25:57.160548 - 10:25:57.222764] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same error pair ("connect() failed, errno = 111" and "sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420") recurs on every reconnect attempt throughout this window, each attempt ending with "qpair failed and we were unable to recover it."
00:33:53.926 [2024-11-06 10:25:57.223066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.223076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.223381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.223392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.223584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.223594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.223872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.223881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.224184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.224193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.224496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.224505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.224670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.224680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.224977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.224987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.225154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.225458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.225467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 
00:33:53.926 [2024-11-06 10:25:57.225773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.225782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.226092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.226101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.226432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.226441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.226604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.226614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.226883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.226892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.227176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.227185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.227493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.227502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.227805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.227813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.228016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.228024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.228230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.228237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 
00:33:53.926 [2024-11-06 10:25:57.228551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.228560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.228866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.228876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.229112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.229120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.229414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.229422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.229731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.229740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.230087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.230095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.230406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.230415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.230701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.230709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.231057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.231066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.231362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.231370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 
00:33:53.926 [2024-11-06 10:25:57.231676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.231685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.231975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.231983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.232187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.232195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.232478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.232486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.232795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.232803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.233109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.233118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.233423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.233431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.233729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.233738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.234045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.234053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.234381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.234390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 
00:33:53.926 [2024-11-06 10:25:57.234597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.234605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.234912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.234924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.235238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.235246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.235591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.235600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.926 [2024-11-06 10:25:57.235943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.926 [2024-11-06 10:25:57.235953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.926 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.236289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.236298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.236650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.236658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.236983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.236991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.237296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.237304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.237719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.237727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.238032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.238040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.238363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.238371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.238676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.238685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.238991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.239000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.239307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.239316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.239616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.239918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.239927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.240241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.240249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.240554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.240563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.240851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.240860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.241194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.241203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.241509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.241517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.241842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.241850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.242167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.242177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.242376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.242385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.242656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.242665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.242974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.242983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.243319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.243327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.243506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.243514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.243800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.243808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.244116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.244125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.244447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.244454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.244764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.244772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.244978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.244987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.245290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.245299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.245624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.245633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.245935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.245944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.246268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.246277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.246579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.246587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.246875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.246883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.247202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.247210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.247518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.247529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.247834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.247842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.248148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.248157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.248462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.248470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.248780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.248789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.248828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.248836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.249113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.249121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.249437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.249445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.249734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.249742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.250044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.250052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.250325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.250333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.250656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.250666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.250976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.250985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.251299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.251308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.251613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.251621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.251908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.251916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.252079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.252089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.252286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.252295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.252598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.252606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.252915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.252924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.253243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.253251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.253546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.253555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.253854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.253867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.254021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.254029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.254354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.254363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.254559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.254567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.254848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.254856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.255124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.255134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.255447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.255455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 
00:33:53.927 [2024-11-06 10:25:57.255755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.927 [2024-11-06 10:25:57.255764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.927 qpair failed and we were unable to recover it. 00:33:53.927 [2024-11-06 10:25:57.255955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.255964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.256240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.256248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.256513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.256521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.256705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.256713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.257025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.257033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.257353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.257362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.257742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.257750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.258052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.258061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.258358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.258366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 
00:33:53.928 [2024-11-06 10:25:57.258673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.258681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.258990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.259000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.259160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.259168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.259444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.259453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.259735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.259743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.260026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.260034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.260340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.260348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.260510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.260518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.260850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.260859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 00:33:53.928 [2024-11-06 10:25:57.261191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.928 [2024-11-06 10:25:57.261199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.928 qpair failed and we were unable to recover it. 
00:33:53.928 [2024-11-06 10:25:57.261514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.928 [2024-11-06 10:25:57.261523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.928 qpair failed and we were unable to recover it.
[... the same three-line failure signature (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back in the original console output, console timestamps 00:33:53.928 through 00:33:53.932, wall-clock timestamps 2024-11-06 10:25:57.261833 through 10:25:57.322560 ...]
00:33:53.932 [2024-11-06 10:25:57.322876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.932 [2024-11-06 10:25:57.322886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:53.932 qpair failed and we were unable to recover it.
00:33:53.932 [2024-11-06 10:25:57.323083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.323091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.323368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.323376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.323667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.323675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.323994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.324003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.324319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.324328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.324631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.324639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.324960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.324969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.325279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.325288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.325569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.325579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.325882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.325890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 
00:33:53.932 [2024-11-06 10:25:57.326154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.326162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.326352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.326360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.326679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.326688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.327019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.327028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.327289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.327298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.327461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.327470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.327663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.327671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.327966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.327976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.328289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.328298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.328618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.328628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 
00:33:53.932 [2024-11-06 10:25:57.328696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.328705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.328855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.328868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.329158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.329167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.329489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.329497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.329696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.329704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.329974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.329983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.330315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.330323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.330632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.330640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.330955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.330963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.331297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.331305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 
00:33:53.932 [2024-11-06 10:25:57.331679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.331688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.332004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.332013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.332313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.332321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.332632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.332640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.332836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.332844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.333060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.333068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.333345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.333353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.333534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.333544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.333851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.333859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.334154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.334164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 
00:33:53.932 [2024-11-06 10:25:57.334355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.334364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.334511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.932 [2024-11-06 10:25:57.334520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.932 qpair failed and we were unable to recover it. 00:33:53.932 [2024-11-06 10:25:57.334811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.334819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.335112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.335121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.335431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.335439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.335716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.335724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.335912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.335921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.336162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.336170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.336450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.336460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.336780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.336788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.337091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.337100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.337418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.337426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.337704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.337712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.338026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.338035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.338321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.338330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.338645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.338653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.338841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.338849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.339170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.339179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.339476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.339484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.339682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.339691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.339971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.339980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.340305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.340314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.340487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.340496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.340794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.340802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.341160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.341169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.341489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.341497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.341833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.341842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.342147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.342156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.342490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.342498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.342846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.342854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.343171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.343179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.343492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.343502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.343804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.343812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.344099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.344107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.344277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.344286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.344536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.344544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.344837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.344845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.345210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.345219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.345522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.345530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.345830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.345838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.346239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.346247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.346548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.346556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.346738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.346746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.346992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.347000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.347317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.347325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.347627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.347635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.347952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.347960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.348268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.348276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.348588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.348599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.348904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.348912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.349263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.349272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.349581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.349589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.349873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.349881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.350187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.350195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.350479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.350488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.350805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.350814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.351011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.351020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.351786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.351806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.352108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.352117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.352879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.352896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 
00:33:53.933 [2024-11-06 10:25:57.353207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.933 [2024-11-06 10:25:57.353216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.933 qpair failed and we were unable to recover it. 00:33:53.933 [2024-11-06 10:25:57.353819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.353835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.354141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.354152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.354483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.354492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.354811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.354819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.355143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.355151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.355524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.355532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.355833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.355841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.356141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.356150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.359873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.359895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 
00:33:53.934 [2024-11-06 10:25:57.360232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.360241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.360589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.360600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.360938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.360948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.361276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.361285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.361476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.361486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.361691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.361703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.362051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.362231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.362241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.362557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.362566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.362898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.362908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 
00:33:53.934 [2024-11-06 10:25:57.363234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.363249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.363531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.363539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.363856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.363868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.364213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.364241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.364568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.364585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.365096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.365135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.365433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.365446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.365772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.365783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.366101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.366116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 00:33:53.934 [2024-11-06 10:25:57.366428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.366439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it. 
00:33:53.934 [2024-11-06 10:25:57.366733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.934 [2024-11-06 10:25:57.366744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:53.934 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) from posix.c:1054:posix_sock_create and the matching nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error against addr=10.0.0.2, port=4420 repeat continuously from 10:25:57.366733 to 10:25:57.429648, first for tqpair=0x2017490 and then for tqpair=0x7fb594000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:54.213 [2024-11-06 10:25:57.429641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.213 [2024-11-06 10:25:57.429648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.213 qpair failed and we were unable to recover it.
00:33:54.213 [2024-11-06 10:25:57.429957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.213 [2024-11-06 10:25:57.429964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.213 qpair failed and we were unable to recover it. 00:33:54.213 [2024-11-06 10:25:57.430303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.430309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.430504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.430510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.430776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.430782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.431061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.431068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.431295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.431624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.431630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.431937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.431944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.432275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.432282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.432571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.432578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 
00:33:54.214 [2024-11-06 10:25:57.432901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.432910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.433209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.433216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.433396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.433404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.433655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.433661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.433945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.433952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.434243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.434250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.434471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.434478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.434708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.434715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.435081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.435089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.435402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.435409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 
00:33:54.214 [2024-11-06 10:25:57.435692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.435699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.436008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.436016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.436324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.436331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.436640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.436647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.436930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.436937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.437241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.437248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.437560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.437567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.437873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.437880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.438164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.438361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.438368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 
00:33:54.214 [2024-11-06 10:25:57.438704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.438713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.438929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.438936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.439152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.439159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.439463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.439471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.439772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.439779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.440079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.440087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.440396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.440403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.440689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.440696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.214 [2024-11-06 10:25:57.441010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.214 [2024-11-06 10:25:57.441017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.214 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.441328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.441335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 
00:33:54.215 [2024-11-06 10:25:57.441667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.441674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.441974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.441981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.442308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.442314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.442523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.442530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.442876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.442884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.443041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.443048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.443334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.443341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.443684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.443691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.444066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.444073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.444396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.444403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 
00:33:54.215 [2024-11-06 10:25:57.444711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.444717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.445012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.445021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.445326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.445333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.445615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.445622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.445940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.445947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.446262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.446270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.446577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.446584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.446882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.446889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.447209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.447216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.447528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.447535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 
00:33:54.215 [2024-11-06 10:25:57.447841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.447848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.448065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.448073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.448385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.448392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.448591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.448598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.448903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.448911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.449213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.449221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.449400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.449407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.449592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.449600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.449880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.449887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.450210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.450217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 
00:33:54.215 [2024-11-06 10:25:57.450500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.450508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.450834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.450841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.451148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.451155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.451446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.451453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.451774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.451781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.452074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.215 [2024-11-06 10:25:57.452081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.215 qpair failed and we were unable to recover it. 00:33:54.215 [2024-11-06 10:25:57.452369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.452377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.452557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.452564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.452762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.452769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.452959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.452967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 
00:33:54.216 [2024-11-06 10:25:57.453287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.453295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.453579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.453587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.453917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.453924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.454141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.454147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.454524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.454530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.454823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.454830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.455133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.455141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.455454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.455461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.455767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.455774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.456074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.456081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 
00:33:54.216 [2024-11-06 10:25:57.456397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.456403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.456711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.456718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.457031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.457038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.457335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.457342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.457551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.457558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.457842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.457856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.458184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.458191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.458477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.458484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.458793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.458799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.459108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.459115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 
00:33:54.216 [2024-11-06 10:25:57.459282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.459290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.459580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.459587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.459838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.459846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.460154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.460161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.460375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.216 [2024-11-06 10:25:57.460382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.216 qpair failed and we were unable to recover it. 00:33:54.216 [2024-11-06 10:25:57.460689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.460696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.460987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.460994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.461319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.461325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.461542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.461549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.461874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.461881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 
00:33:54.217 [2024-11-06 10:25:57.462186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.462195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.462536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.462542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.462823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.462838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.463144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.463503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.463510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.463815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.463821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.464117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.464124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.464406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.464419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.464726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.464733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.465028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.465037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 
00:33:54.217 [2024-11-06 10:25:57.465199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.465207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.465514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.465521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.465816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.465823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.466128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.466136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.466452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.466458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.466785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.466792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.467008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.467015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.467227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.467233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.467444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.467451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.467750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.467757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 
00:33:54.217 [2024-11-06 10:25:57.468091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.468100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.468451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.468457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.468746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.468754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.469049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.469056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.469280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.469287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.469493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.469500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.469768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.469775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.470072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.470080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.470388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.470395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.470679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.470687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 
00:33:54.217 [2024-11-06 10:25:57.470997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.471004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.471292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.471299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.217 [2024-11-06 10:25:57.471616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.217 [2024-11-06 10:25:57.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.217 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.471929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.471936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.472314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.472321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.472644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.472651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.472937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.472944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.473266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.473274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.473588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.473597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.473885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.473893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 
00:33:54.218 [2024-11-06 10:25:57.474215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.474224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.474386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.474393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.474652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.474659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.474966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.474974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.475170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.475177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.475378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.475384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.475711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.475718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.476027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.476035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.476357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.476364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.476675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.476682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 
00:33:54.218 [2024-11-06 10:25:57.476992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.476999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.477293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.477305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.477605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.477612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.477900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.477907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.478230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.478237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.478426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.478432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.478756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.478763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.479074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.479081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.479325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.479333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.479648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.479655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 
00:33:54.218 [2024-11-06 10:25:57.479858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.479869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.480205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.480213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.480521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.480528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.480845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.480852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.481167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.481174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.481492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.481498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.481820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.481826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.482215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.482223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.482437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.482444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.218 [2024-11-06 10:25:57.482797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.482804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 
00:33:54.218 [2024-11-06 10:25:57.482975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.218 [2024-11-06 10:25:57.482983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.218 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.483259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.483266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.483576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.483583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.483828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.483835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.484144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.484151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.484556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.484871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.484878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.485067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.485074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.485453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.485460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.485801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.485808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 
00:33:54.219 [2024-11-06 10:25:57.486120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.486129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.486433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.486441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.486620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.486627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.486882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.486889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.487053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.487061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.487286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.487292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.487486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.487493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.487781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.487789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.488158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.488165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.488429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.488436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 
00:33:54.219 [2024-11-06 10:25:57.488711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.488718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.489049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.489057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.489364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.489677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.489684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.490007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.490015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.490348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.490355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.490672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.490679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.490996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.491003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.491176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.491184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.491494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.491501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 
00:33:54.219 [2024-11-06 10:25:57.491820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.491826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.492130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.492138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.492451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.492459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.492761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.492769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.493063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.493071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.493385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.493393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.493701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.493707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.494020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.494028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.219 [2024-11-06 10:25:57.494342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.219 [2024-11-06 10:25:57.494349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.219 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.494652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.494659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 
00:33:54.220 [2024-11-06 10:25:57.494966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.494974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.495296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.495303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.495618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.495625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.495941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.495949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.496255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.496261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.496548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.496555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.496859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.496870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.497127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.497134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.497454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.497460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.497756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.497764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 
00:33:54.220 [2024-11-06 10:25:57.497979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.497986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.498304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.498311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.498635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.498642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.498834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.499211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.499218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.499535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.499542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.499866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.499873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.500165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.500172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.500477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.500483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.500682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.500689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 
00:33:54.220 [2024-11-06 10:25:57.500888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.500895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.501196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.501203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.501512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.501519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.501821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.501829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.502015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.502024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.502329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.502337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.502636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.502643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.502832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.502839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.503153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.503160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.503452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.503460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 
00:33:54.220 [2024-11-06 10:25:57.503763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.503771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.504057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.504064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.504255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.504262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.504641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.504648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.504947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.504954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.505274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.505281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.505598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.220 [2024-11-06 10:25:57.505605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.220 qpair failed and we were unable to recover it. 00:33:54.220 [2024-11-06 10:25:57.505905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.505913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.506096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.506103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.506395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.506402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 
00:33:54.221 [2024-11-06 10:25:57.506717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.506723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.507017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.507025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.507196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.507204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.507508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.507515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.507805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.507813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.508099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.508106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.508287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.508294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.508659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.508666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.509007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.509015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.509318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.509325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 
00:33:54.221 [2024-11-06 10:25:57.509608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.509616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.509927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.509934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4098263 Killed "${NVMF_APP[@]}" "$@" 00:33:54.221 [2024-11-06 10:25:57.510245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.510254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.510443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.510451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.510716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.510722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.510893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:54.221 [2024-11-06 10:25:57.511235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.511243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:54.221 [2024-11-06 10:25:57.511538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.511546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 
00:33:54.221 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:54.221 [2024-11-06 10:25:57.511868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.511877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.221 [2024-11-06 10:25:57.512151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.512159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.221 [2024-11-06 10:25:57.512470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.512478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.512766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.512776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.513065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.513072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.513440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.513448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.513745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.513753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.514122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.514134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 00:33:54.221 [2024-11-06 10:25:57.514431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.221 [2024-11-06 10:25:57.514439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.221 qpair failed and we were unable to recover it. 
00:33:54.221 [2024-11-06 10:25:57.514747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.514755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.515055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.515063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.515430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.515438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.515771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.515779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.515984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.515992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.516152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.516161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.516479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.516487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.516790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.516798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.517109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.517116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.517299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.517307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 
00:33:54.222 [2024-11-06 10:25:57.517588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.517596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.517907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.517915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.518232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.518544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.518551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.518747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.518755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.518932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.518940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.519149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.519157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.519346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.519354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4099296 00:33:54.222 [2024-11-06 10:25:57.519663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.519672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 
00:33:54.222 [2024-11-06 10:25:57.519958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.519967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4099296 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:54.222 [2024-11-06 10:25:57.520317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.520327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4099296 ']' 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.222 [2024-11-06 10:25:57.520677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.520686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:54.222 [2024-11-06 10:25:57.520890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.520900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.222 [2024-11-06 10:25:57.521219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.521228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:54.222 [2024-11-06 10:25:57.521499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.521508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 
00:33:54.222 10:25:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.222 [2024-11-06 10:25:57.521660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.521672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.521961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.521970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.522289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.522297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.522606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.522614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.522946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.522957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.523276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.523284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.523472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.523480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.523638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.523648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 00:33:54.222 [2024-11-06 10:25:57.523815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.222 [2024-11-06 10:25:57.523824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.222 qpair failed and we were unable to recover it. 
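Interleaved with the connection errors, the xtrace output above shows the test bringing the target back up: nvmf/common.sh records the new nvmfpid (4099296), launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with "ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF0", and waitforlisten then polls for the RPC socket at /var/tmp/spdk.sock with max_retries=100. A simplified stand-in for that start-and-wait sequence (paths, namespace, and flags copied from the log; the polling method via rpc.py is an assumption for illustration, not a copy of autotest_common.sh) could look like:

```bash
#!/usr/bin/env bash
# Simplified stand-in for the traced waitforlisten step; illustrative only.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NETNS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock
MAX_RETRIES=100

# Start the target in the test namespace, as the trace above does.
ip netns exec "$NETNS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Poll the RPC socket until the target answers or we give up.
for ((retry = 0; retry < MAX_RETRIES; retry++)); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is up and listening on $RPC_SOCK"
        exit 0
    fi
    sleep 0.5
done
echo "nvmf_tgt did not come up within $MAX_RETRIES retries" >&2
exit 1
```

Until the relaunched target is configured and listening on 10.0.0.2:4420 again, the host's reconnect attempts keep failing with ECONNREFUSED, which is why the same error triplet continues throughout this part of the log.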
00:33:54.223 [2024-11-06 10:25:57.524120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.524128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.524420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.524429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.524757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.524765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.525054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.525062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.525166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.525175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.525441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.525449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.525769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.525778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.525974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.525983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.526188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.526196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.526482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.526490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 
00:33:54.223 [2024-11-06 10:25:57.526783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.526791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.527072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.527081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.527350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.527358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.527661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.527669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.527979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.527987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.528315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.528323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.528601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.528609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.528829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.528837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.529159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.529168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.529482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.529490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 
00:33:54.223 [2024-11-06 10:25:57.529787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.529795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.530112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.530121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.530445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.530458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.530728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.530736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.530945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.530954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.531328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.531336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.531420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.531428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.531722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.531730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.531914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.531923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.532294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.532302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 
00:33:54.223 [2024-11-06 10:25:57.532562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.532570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.532952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.532960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.533304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.533311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.533518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.533525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.533702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.533709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.533983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.533992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.534276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.534283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.223 [2024-11-06 10:25:57.534494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.223 [2024-11-06 10:25:57.534502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.223 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.534700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.534707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.535015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.535023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 
00:33:54.224 [2024-11-06 10:25:57.535340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.535347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.535508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.535516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.535813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.535820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.536117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.536124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.536441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.536448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.536772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.536779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.537080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.537087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.537402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.537410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.537807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.537814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.537975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.537983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 
00:33:54.224 [2024-11-06 10:25:57.538268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.538275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.538458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.538464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.538770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.538777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.538990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.538997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.539221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.539228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.539547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.539554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.539831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.539838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.540166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.540173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.540492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.540500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.540690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.540697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 
00:33:54.224 [2024-11-06 10:25:57.540951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.540958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.541310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.541317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.541629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.541637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.541932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.541940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.542213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.542220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.542526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.542533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.542741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.542748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.543060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.543068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.543281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.543288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.543578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.543585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 
00:33:54.224 [2024-11-06 10:25:57.543961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.543969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.544262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.544270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.544591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.544599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.544908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.544916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.545246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.545253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.224 qpair failed and we were unable to recover it. 00:33:54.224 [2024-11-06 10:25:57.545577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.224 [2024-11-06 10:25:57.545585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.545897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.545905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.546219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.546227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.546521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.546528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.546818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.546825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 
00:33:54.225 [2024-11-06 10:25:57.547131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.547138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.547179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.547187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.547480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.547487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.547847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.547854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.548032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.548040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.548356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.548363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.548631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.548639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.548947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.548954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.549133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.549141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.549359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.549366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 
00:33:54.225 [2024-11-06 10:25:57.549652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.549660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.549945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.549953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.550224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.550232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.550637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.550644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.550940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.550948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.551276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.551283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.551583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.551591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.551904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.551911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.552209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.552216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.552545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.552552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 
00:33:54.225 [2024-11-06 10:25:57.552867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.552875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.553163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.553170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.553467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.553476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.553850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.553857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.554174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.554182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.554423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.554431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.554649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.554656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.554867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.554876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.225 [2024-11-06 10:25:57.555160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.225 [2024-11-06 10:25:57.555167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.225 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.555369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.555376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-11-06 10:25:57.555659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.555666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.555995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.556002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.556310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.556318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.556650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.556658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.556829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.556837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.557065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.557073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.557288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.557297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.557594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.557601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.557793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.557800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.558034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.558041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-11-06 10:25:57.558358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.558365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.558644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.558651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.558950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.558957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.559246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.559253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.559571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.559578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.559889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.559896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.559981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.559987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.560302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.560309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.560638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.560646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.560928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.560935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-11-06 10:25:57.561280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.561287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.561589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.561596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.561885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.561893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.562218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.562224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.562520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.562527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.562697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.562705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.562928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.562935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.563210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.563217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.563528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.563535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.563718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.563726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 
00:33:54.226 [2024-11-06 10:25:57.564048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.564055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.564089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.564096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.564394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.564403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.564729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.564736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.564923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.564930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.565288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.565295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.226 [2024-11-06 10:25:57.565587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.226 [2024-11-06 10:25:57.565594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.226 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.565804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.565811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.566090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.566097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.566437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.566445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 
00:33:54.227 [2024-11-06 10:25:57.566718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.566725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.567039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.567046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.567215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.567223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.567546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.567553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.567869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.567876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.568239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.568247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.568437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.568444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.568725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.568732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.568996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.569004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.569308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.569315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 
00:33:54.227 [2024-11-06 10:25:57.569610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.569618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.569929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.569936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.570308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.570315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.570654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.570661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.570972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.570979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.571191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.571198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.571574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.571580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.571913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.571920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.572143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.572150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.572468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.572475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 
00:33:54.227 [2024-11-06 10:25:57.572808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.572815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.572989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.572996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.573103] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:33:54.227 [2024-11-06 10:25:57.573156] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:54.227 [2024-11-06 10:25:57.573309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.573318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.573643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.573649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.573825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.573831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.574007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.574015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.574466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.574473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.574757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.574765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
00:33:54.227 [2024-11-06 10:25:57.575050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.227 [2024-11-06 10:25:57.575059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420
00:33:54.227 qpair failed and we were unable to recover it.
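Note on the errors above: errno = 111 is ECONNREFUSED on Linux, i.e. each TCP connection attempt from the initiator to 10.0.0.2:4420 (the NVMe/TCP well-known port) is actively refused. That is consistent with no listener being up on that port at this point in the run: the "Starting SPDK v25.01-pre ... / DPDK 24.03.0 initialization..." banner and the EAL parameter line interleaved here show the nvmf application (coremask 0xF0, file-prefix spdk0) only just starting, so every nvme_tcp_qpair_connect_sock attempt fails and the qpair cannot recover. The standalone sketch below, an illustration and not SPDK's posix_sock_create, reproduces the same errno with a bare connect() to an address with no listener; the address and port simply mirror the log.

/* Illustrative sketch only (not SPDK code): a plain connect() to an address
 * with nothing listening fails with errno 111 (ECONNREFUSED) on Linux,
 * matching the errno reported by posix_sock_create above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}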
00:33:54.227 [2024-11-06 10:25:57.575389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.575397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.575709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.575717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.575919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.575927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.576255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.576264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.227 [2024-11-06 10:25:57.576475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.227 [2024-11-06 10:25:57.576483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.227 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.576761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.576769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.576934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.576942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.577221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.577229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.577557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.577564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.577885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.577894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 
00:33:54.228 [2024-11-06 10:25:57.578237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.578245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.578555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.578563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.578681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.578688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.578982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.578990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.579069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.579076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.579339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.579349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.579530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.579538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.579807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.579815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.580141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.580150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.580370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.580378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 
00:33:54.228 [2024-11-06 10:25:57.580548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.580555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.580868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.580876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.581178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.581185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.581511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.581518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.581841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.581848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.582149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.582157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.582468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.582476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.582803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.582812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.583014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.583022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.583302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.583310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 
00:33:54.228 [2024-11-06 10:25:57.583645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.583652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.583961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.583969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.584257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.584265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.584639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.584647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.584929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.584937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.585131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.585138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.585314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.585322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.585592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.585600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.585953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.585961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.586377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.586385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 
00:33:54.228 [2024-11-06 10:25:57.586550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.586558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.586724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.586732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.228 qpair failed and we were unable to recover it. 00:33:54.228 [2024-11-06 10:25:57.587059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.228 [2024-11-06 10:25:57.587067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.587403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.587411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.587730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.587738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.587912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.587920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.588214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.588222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.588551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.588558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.588875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.588884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.589076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.589084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 
00:33:54.229 [2024-11-06 10:25:57.589391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.589399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.589810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.589818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.590133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.590140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.590459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.590466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.590641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.590649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.590968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.590977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.591290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.591297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.591595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.591602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.591911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.591918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.592221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.592228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 
00:33:54.229 [2024-11-06 10:25:57.592582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.592589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.592801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.592808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.593157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.593165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.593498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.593505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.593818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.593825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.594135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.594144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.594426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.594432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.594752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.594759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.595070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.595077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.595382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.595389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 
00:33:54.229 [2024-11-06 10:25:57.595687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.595695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.595788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.595795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.595998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.596007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.596196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.596203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.596551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.596559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.596874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.596882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.597174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.597181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.597372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.597379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.597709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.597716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 00:33:54.229 [2024-11-06 10:25:57.598006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.229 [2024-11-06 10:25:57.598014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.229 qpair failed and we were unable to recover it. 
00:33:54.229 [2024-11-06 10:25:57.598070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.598078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.598511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.598518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.598892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.598900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.599221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.599229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.599546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.599553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.599917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.599925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.600219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.600227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.600558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.600565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.600770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.600777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.601112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.601119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 
00:33:54.230 [2024-11-06 10:25:57.601462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.601469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.601624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.601630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.601807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.601813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.602000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.602008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.602222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.602230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.602413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.602421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.602720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.602728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.603044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.603051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.603260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.603267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.603448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.603455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 
00:33:54.230 [2024-11-06 10:25:57.603757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.603764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.604148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.604156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.604362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.604369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.604683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.604690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.604915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.604923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.605112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.605119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.605415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.605422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.605754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.605761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.606064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.606071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.606367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.606374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 
00:33:54.230 [2024-11-06 10:25:57.606693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.606701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.607008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.607015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.607328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.230 [2024-11-06 10:25:57.607335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.230 qpair failed and we were unable to recover it. 00:33:54.230 [2024-11-06 10:25:57.607621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.607628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.608016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.608023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.608316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.608323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.608545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.608551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.608856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.608866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.609046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.609054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.609353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.609360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 
00:33:54.231 [2024-11-06 10:25:57.609614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.609621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.609951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.609959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.610265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.610272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.610588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.610595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.610892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.610900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.611209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.611216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.611516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.611522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.611838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.611844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.612051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.612059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.612374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.612381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 
00:33:54.231 [2024-11-06 10:25:57.612706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.612713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.612903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.612910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.613314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.613320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.613618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.613625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.613989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.613998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.614330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.614339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.614641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.614937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.614944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.615271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.615277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.615439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.615446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 
00:33:54.231 [2024-11-06 10:25:57.615773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.615780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.615889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.615896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.616093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.616100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.616399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.616408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.616690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.616697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.616915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.616922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.617208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.617215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.617534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.617541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.617881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.617888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.618088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.618095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 
00:33:54.231 [2024-11-06 10:25:57.618367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.231 [2024-11-06 10:25:57.618374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.231 qpair failed and we were unable to recover it. 00:33:54.231 [2024-11-06 10:25:57.618688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.618694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.618989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.618996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.619313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.619319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.619518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.619525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.619624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.619630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.619847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.619854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.620178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.620185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.620481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.620489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.620677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.620684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 
00:33:54.232 [2024-11-06 10:25:57.620995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.621002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.621174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.621181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.621520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.621529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.621719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.621727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.622029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.622036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.622225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.622232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.622410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.622417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.622585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.622593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.622912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.622920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.623235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.623242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 
00:33:54.232 [2024-11-06 10:25:57.623549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.623557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.623625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.623633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.623922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.624131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.624138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.624327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.624334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.624621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.624628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.624690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.624698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.624894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.624905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.625213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.625220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.625412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.625420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 
00:33:54.232 [2024-11-06 10:25:57.625711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.625719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.625980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.625987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.626315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.626323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.626714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.626721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.627022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.627030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.627215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.627222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.627408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.627415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.627604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.627611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.232 [2024-11-06 10:25:57.627945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.232 [2024-11-06 10:25:57.627953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.232 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.628238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.628245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 
00:33:54.233 [2024-11-06 10:25:57.628624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.628631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.628933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.628941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.629116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.629123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.629389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.629396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.629713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.629720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.630035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.630043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.630367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.630374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.630681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.630688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.631009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.631016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.631213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.631220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 
00:33:54.233 [2024-11-06 10:25:57.631526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.631533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.631741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.631748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.632074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.632083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.632379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.632386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.632679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.632686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.632995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.633003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.633310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.633630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.633637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.633935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.633942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.634243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.634251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 
00:33:54.233 [2024-11-06 10:25:57.634575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.634582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.634870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.634877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.635266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.635273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.635578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.635585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.635723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.635730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb594000b90 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.635956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2014020 is same with the state(6) to be set 00:33:54.233 [2024-11-06 10:25:57.636514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.636553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.636934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.636949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.637377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.637415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.637622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.637634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 
00:33:54.233 [2024-11-06 10:25:57.637852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.637870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.638261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.638300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.638600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.638619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.638670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.638681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.638980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.638991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.639298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.639309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.639397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.639410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.233 [2024-11-06 10:25:57.639709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.233 [2024-11-06 10:25:57.639720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.233 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.640035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.640045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.640336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.640346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 
00:33:54.234 [2024-11-06 10:25:57.640659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.640669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.641020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.641031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.641218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.641229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.641563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.641572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.641865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.641877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.642166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.642177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.642485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.642496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.642823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.642833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.643047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.643058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.643391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.643400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 
00:33:54.234 [2024-11-06 10:25:57.643613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.643622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.643927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.643937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.644256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.644267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.644495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.644505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.644825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.644834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.644993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.645006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.645295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.645305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.645604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.645614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.646011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.646021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.646295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.646306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 
00:33:54.234 [2024-11-06 10:25:57.646602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.646612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.646939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.646949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.647269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.647279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.647584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.647594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.647968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.647978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.648286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.648296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.648568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.648578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.648859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.648882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.649187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.649197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.234 [2024-11-06 10:25:57.649495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.649506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 
00:33:54.234 [2024-11-06 10:25:57.649875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.234 [2024-11-06 10:25:57.649886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.234 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.650163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.650173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.650498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.650508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.650838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.650848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.651062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.651074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.651233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.651244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.651539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.651550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.651857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.651874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.652159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.652169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.652471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.652481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 
00:33:54.235 [2024-11-06 10:25:57.652807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.652819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.653124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.653134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.653433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.653443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.653755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.653766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.653844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.653854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.654030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.654042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.654356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.654366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.654674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.654685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.655016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.655027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.655339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.655349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 
00:33:54.235 [2024-11-06 10:25:57.655512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.655523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.655736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.655746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.655927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.655937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.656275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.656285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.656599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.656611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.656893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.656903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.657245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.657255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.657593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.657603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.657886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.657896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.658191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.658201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 
00:33:54.235 [2024-11-06 10:25:57.658517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.658527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.658715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.658726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.658920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.658933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.659206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.659216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.659538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.659548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.659760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.659770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.660162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.660172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.660462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.660477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.235 [2024-11-06 10:25:57.660770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.235 [2024-11-06 10:25:57.660780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.235 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.661028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.661038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 
00:33:54.236 [2024-11-06 10:25:57.661377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.661388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.661711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.661721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.662022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.662033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.662361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.662371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.662656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.662666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.662967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.662978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.663291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.663300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.663582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.663592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.663899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.663910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.664212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.664222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 
00:33:54.236 [2024-11-06 10:25:57.664507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.664517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.664695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.664705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.664885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.664896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.665208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.665218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.665560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.665570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.665735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.665747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.666030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.666041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.666383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.666393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.666681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.666691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.667032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.667043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 
00:33:54.236 [2024-11-06 10:25:57.667209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.667221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.667577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.667588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.667868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.667878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.668099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.668109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.668434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.668444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.668643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.668654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.668966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.668977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.669272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.669282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.669609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.669619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.669896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.669906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 
00:33:54.236 [2024-11-06 10:25:57.670198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.670208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.670507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.670517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.670722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.670732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.671036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.671046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.671370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.671380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.671685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.671695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.672013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.236 [2024-11-06 10:25:57.672024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.236 qpair failed and we were unable to recover it. 00:33:54.236 [2024-11-06 10:25:57.672227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.672237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.672450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.672462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.672630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.672640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 
00:33:54.237 [2024-11-06 10:25:57.673024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.673035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.673408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.673417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.673625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.673634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.673933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.673943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.674250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.674259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.674537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.674546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.674837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.674847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.675162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.675172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.675500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.675509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.675693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.675703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 
00:33:54.237 [2024-11-06 10:25:57.676008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.676019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.676318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.676327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.676375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.237 [2024-11-06 10:25:57.676720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.676729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.677020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.677030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.677442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.677763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.677773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.678068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.678078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.678371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.678380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.678658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.678668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.679004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.679014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 
00:33:54.237 [2024-11-06 10:25:57.679326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.679336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.679654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.679664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.679979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.679989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.680307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.680316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.680655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.680664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.680967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.680977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.681293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.681302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.681596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.681923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.681934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.682242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.682252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 
00:33:54.237 [2024-11-06 10:25:57.682313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.682321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.682603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.682613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.682899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.237 [2024-11-06 10:25:57.682909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.237 qpair failed and we were unable to recover it. 00:33:54.237 [2024-11-06 10:25:57.683233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.683243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.683538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.683548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.683844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.683853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.684168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.684178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.684462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.684472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.684848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.684860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.685166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.685176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 
00:33:54.238 [2024-11-06 10:25:57.685519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.685530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.685845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.685855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.686131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.686141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.686309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.686320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.686708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.686718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.686992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.687002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.687336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.687346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.687626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.687642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.687837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.687848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.688030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.688041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 
00:33:54.238 [2024-11-06 10:25:57.688426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.688436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.688743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.688752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.689042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.689052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.689365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.689375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.689686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.689696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.689885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.689895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.690074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.690084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.690418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.690428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.690815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.690826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.691140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.691151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 
00:33:54.238 [2024-11-06 10:25:57.691471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.691482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.691808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.691818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.692106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.692116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.692432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.692442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.692817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.692826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.693062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.693072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.693404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.693413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.693728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.693739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.694056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.694066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 00:33:54.238 [2024-11-06 10:25:57.694215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.238 [2024-11-06 10:25:57.694225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.238 qpair failed and we were unable to recover it. 
00:33:54.238 [2024-11-06 10:25:57.694618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.694628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.694918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.694929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.695255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.695265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.695590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.695600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.695908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.696205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.696216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.696387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.696398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.696727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.696738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.697031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.697041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 00:33:54.239 [2024-11-06 10:25:57.697376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.239 [2024-11-06 10:25:57.697389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.239 qpair failed and we were unable to recover it. 
00:33:54.515 [2024-11-06 10:25:57.697702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.697714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.698013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.698023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.698360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.698370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.698683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.698692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.698996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.699007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.699388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.699398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.699701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.699711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.699879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.699890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.700189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.515 [2024-11-06 10:25:57.700199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.515 qpair failed and we were unable to recover it. 00:33:54.515 [2024-11-06 10:25:57.700552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.700562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 
00:33:54.516 [2024-11-06 10:25:57.700852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.700869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.701189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.701199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.701512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.701521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.701836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.701845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.702043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.702054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.702386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.702396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.702569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.702579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.702897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.702907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.703217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.703227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.703525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.703535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 
00:33:54.516 [2024-11-06 10:25:57.703713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.703723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.703983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.703994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.704300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.704309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.704609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.704619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.704803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.704814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.705128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.705139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.705430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.705443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.705779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.705788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.706030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.706040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.706465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.706475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 
00:33:54.516 [2024-11-06 10:25:57.706784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.706794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.707090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.707100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.707287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.707298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.707475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.707485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.707821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.707831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.708121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.708131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.708445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.708455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.708756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.708766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.708967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.708977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.709299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.709309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 
00:33:54.516 [2024-11-06 10:25:57.709675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.516 [2024-11-06 10:25:57.709686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.516 qpair failed and we were unable to recover it. 00:33:54.516 [2024-11-06 10:25:57.709975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.709987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.710306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.710316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.710659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.710668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.710943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.710953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.711132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.711142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.711351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.711361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.711636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.711646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.712026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.712037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.712228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.712240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 
00:33:54.517 [2024-11-06 10:25:57.712555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.712565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.712691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.517 [2024-11-06 10:25:57.712719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.517 [2024-11-06 10:25:57.712726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.517 [2024-11-06 10:25:57.712733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.517 [2024-11-06 10:25:57.712739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.517 [2024-11-06 10:25:57.712860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.712876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.713174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.713184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.713375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.713386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.713709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.713720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.714031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.714041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.714381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.714392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 
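Per the app_setup_trace notices above, a tracepoint group mask of 0xFFFF was enabled for the nvmf target, so the trace buffer can either be sampled while the run is still alive or preserved for offline analysis. A sketch of the corresponding commands, taken directly from the notice text (the instance id 0 and the /dev/shm path are exactly as printed by this run):

  # capture a snapshot of trace events from the running nvmf target (shm instance 0)
  spdk_trace -s nvmf -i 0
  # 'spdk_trace' with no arguments also works when this is the only SPDK app running
  spdk_trace
  # or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/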
00:33:54.517 [2024-11-06 10:25:57.714324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:54.517 [2024-11-06 10:25:57.714442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:54.517 [2024-11-06 10:25:57.714569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:54.517 [2024-11-06 10:25:57.714571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:54.517 [2024-11-06 10:25:57.714708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.714718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.715136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.715146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.715443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.715453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.715783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.715793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.716184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.716194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.716490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.716500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.716673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.716683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.716994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.717004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.717326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.717336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 
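The four reactor threads above (cores 4-7) line up with the earlier "Total cores available: 4" notice from spdk_app_start. A core selection like this is normally expressed as a CPU mask on the application command line; the exact invocation is not shown in this excerpt, so the binary path and flag below are illustrative only:

  # 0xF0 selects cores 4-7, matching the reactors reported above (illustrative;
  # the actual command line used by the test is not part of this log excerpt)
  ./build/bin/nvmf_tgt -m 0xF0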
00:33:54.517 [2024-11-06 10:25:57.717659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.717669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.717982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.717992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.718214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.718224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.718547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.718557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.718843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.718853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.719186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.719197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.719487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.517 [2024-11-06 10:25:57.719497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.517 qpair failed and we were unable to recover it. 00:33:54.517 [2024-11-06 10:25:57.719837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.719847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.720176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.720186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.720423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.720434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 
00:33:54.518 [2024-11-06 10:25:57.720642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.720651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.721011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.721028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.721363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.721373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.721690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.721700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.721999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.722009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.722214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.722224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.722392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.722403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.722613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.722623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.722977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.722989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.723166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.723176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 
00:33:54.518 [2024-11-06 10:25:57.723494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.723504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.723832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.723842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.724152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.724162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.724527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.724537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.724732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.724741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.725037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.725048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.725392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.725402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.725614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.725624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.725866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.725877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.726193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.726203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 
00:33:54.518 [2024-11-06 10:25:57.726410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.726419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.726740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.726750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.726952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.726962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.727297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.727307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.727612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.727622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.727913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.727923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.728256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.728266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.728599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.728609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.728910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.728922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.729101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.729111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 
00:33:54.518 [2024-11-06 10:25:57.729414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.729424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.729478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.729488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.729839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.729849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.730030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.730041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.518 [2024-11-06 10:25:57.730344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.518 [2024-11-06 10:25:57.730354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.518 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.730562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.730571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.730773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.730784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.730967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.730977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.731251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.731262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.731567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.731577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 
00:33:54.519 [2024-11-06 10:25:57.731911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.731921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.732103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.732114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.732404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.732415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.732604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.732614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.732793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.732803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.733029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.733040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.733398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.733408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.733749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.733759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.733938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.733950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.734340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.734350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 
00:33:54.519 [2024-11-06 10:25:57.734398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.734408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.734689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.734701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.734993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.735012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.735249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.735260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.735446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.735456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.735686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.735697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.736037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.736047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.736266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.736277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.736507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.736516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.736854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.736868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 
00:33:54.519 [2024-11-06 10:25:57.737150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.737161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.737437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.737448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.737775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.737785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.738090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.738101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.738177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.738187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.738460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.738470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.738799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.738809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.739143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.739153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.519 qpair failed and we were unable to recover it. 00:33:54.519 [2024-11-06 10:25:57.739540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.519 [2024-11-06 10:25:57.739551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.739733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.739743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 
00:33:54.520 [2024-11-06 10:25:57.739980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.739991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.740199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.740211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.740384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.740394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.740559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.740570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.740858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.740871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.741173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.741183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.741237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.741246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.741532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.741542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.741828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.741838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.742145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.742155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 
00:33:54.520 [2024-11-06 10:25:57.742565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.742575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.742872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.742882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.743217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.743226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.743397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.743406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.743689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.743699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.743752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.743761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.744085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.744094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.744236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.744245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.744553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.744870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.744880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 
00:33:54.520 [2024-11-06 10:25:57.745190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.745199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.745507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.745517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.745878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.745889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.746227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.746237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.746541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.746551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.746913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.746924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.747205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.747217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.747380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.747391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.747623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.747633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 00:33:54.520 [2024-11-06 10:25:57.747975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.520 [2024-11-06 10:25:57.747985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.520 qpair failed and we were unable to recover it. 
00:33:54.520 [2024-11-06 10:25:57.748299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.748309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.748472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.748485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.748812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.748822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.749146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.749157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.749452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.749463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.749606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.749616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.749801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.749812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.750137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.750149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.750456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.750467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.750815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.750826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 
00:33:54.521 [2024-11-06 10:25:57.751175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.751185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.751244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.751253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.751570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.751580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.751927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.751937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.752139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.752150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.752358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.752369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.752529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.752543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.752811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.752821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.753001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.753012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.753308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.753318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 
00:33:54.521 [2024-11-06 10:25:57.753499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.753509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.753813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.753823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.754018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.754031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.754261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.754274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.754460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.754471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.754783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.754793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.755202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.755212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.755422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.755432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.755801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.755811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.755991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.756001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 
00:33:54.521 [2024-11-06 10:25:57.756366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.756376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.756684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.521 [2024-11-06 10:25:57.756694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.521 qpair failed and we were unable to recover it. 00:33:54.521 [2024-11-06 10:25:57.757011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.757022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.757209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.757219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.757540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.757550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.757858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.757888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.758250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.758260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.758463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.758473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.758693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.758705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 00:33:54.522 [2024-11-06 10:25:57.758891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.758902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it. 
00:33:54.522 [2024-11-06 10:25:57.759217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.522 [2024-11-06 10:25:57.759227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.522 qpair failed and we were unable to recover it.
00:33:54.522-00:33:54.528 [2024-11-06 10:25:57.759398 through 10:25:57.817550] the same three-message sequence repeats for every retried attempt in this window: posix_sock_create reports "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420", and each attempt ends with "qpair failed and we were unable to recover it."
00:33:54.528 [2024-11-06 10:25:57.817896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.817907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.818206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.818216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.818417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.818427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.818625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.818634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.818957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.818967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.819283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.819293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.819633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.819642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.819934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.819944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.820261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.820271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.820453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.820462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 
00:33:54.528 [2024-11-06 10:25:57.820643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.820653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.820842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.820852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.820935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.820946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.821170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.821179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.821384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.821394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.821723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.821733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.822038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.822048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.822385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.822397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.822592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.822602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.822953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.822964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 
00:33:54.528 [2024-11-06 10:25:57.823342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.823352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.823702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.824025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.824035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.824349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.824358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.824650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.824660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.825012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.825022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.825216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.825225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.825454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.825465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.825663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.825673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.825922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.825932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 
00:33:54.528 [2024-11-06 10:25:57.826255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.826265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.826600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.826610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.826935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.826945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.528 [2024-11-06 10:25:57.827137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.528 [2024-11-06 10:25:57.827147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.528 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.827475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.827485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.827850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.827860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.828206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.828216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.828456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.828466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.828808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.828818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.829019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.829029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 
00:33:54.529 [2024-11-06 10:25:57.829411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.829421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.829647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.829656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.829966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.829976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.830353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.830362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.830710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.830720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.830905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.830915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.831240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.831251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.831490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.831500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.831662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.831671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.831961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.831972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 
00:33:54.529 [2024-11-06 10:25:57.832309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.832319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.832617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.832627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.832974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.832984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.833410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.833420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.833763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.833773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.834079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.834090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.834274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.834284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.834637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.834647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.834831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.834845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.835018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.835028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 
00:33:54.529 [2024-11-06 10:25:57.835214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.835223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.835582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.835592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.835885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.835895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.836949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.836959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 00:33:54.529 [2024-11-06 10:25:57.837304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.529 [2024-11-06 10:25:57.837315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.529 qpair failed and we were unable to recover it. 
00:33:54.530 [2024-11-06 10:25:57.837370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.837379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.837581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.837591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.837918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.837928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.838257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.838266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.838637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.838646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.838819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.838829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.839201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.839212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.839564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.839573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.839918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.839928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.840243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.840252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 
00:33:54.530 [2024-11-06 10:25:57.840574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.840585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.840674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.840684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.840732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.840741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.841036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.841046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.841353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.841363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.841656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.841668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.842015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.842026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.842238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.842248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.842459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.842468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.842630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.842639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 
00:33:54.530 [2024-11-06 10:25:57.842943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.842954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.843138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.843148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.843324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.843333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.843631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.843640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.843988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.843998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.844313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.844323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.844654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.844664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.530 [2024-11-06 10:25:57.844995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.530 [2024-11-06 10:25:57.845006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.530 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.845294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.845304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.845499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 
00:33:54.531 [2024-11-06 10:25:57.845832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.845842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.846163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.846173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.846475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.846484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.846761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.846770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.846957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.846967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.847255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.847265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.847601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.847611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.847911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.847922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.848215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.848225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.848545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.848555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 
00:33:54.531 [2024-11-06 10:25:57.848739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.848749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.848936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.848947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.849143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.849156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.849438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.849448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.849789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.849799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.850072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.850082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.850269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.850280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.850490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.850500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.850711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.850722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.851047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.851057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 
00:33:54.531 [2024-11-06 10:25:57.851236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.851246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.851604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.851614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.851812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.851822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.851990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.852000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.852195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.852205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.852391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.852400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.852603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.852613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.852782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.852791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.852994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.853005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 00:33:54.531 [2024-11-06 10:25:57.853192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.853201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it. 
00:33:54.531 [2024-11-06 10:25:57.853517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.531 [2024-11-06 10:25:57.853526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.531 qpair failed and we were unable to recover it.
00:33:54.531 [... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 10:25:57.853 through 10:25:57.911 (elapsed 00:33:54.531 to 00:33:54.537) ...]
00:33:54.537 [2024-11-06 10:25:57.911651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.537 [2024-11-06 10:25:57.911660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.537 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.911817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.911828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.911996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.912007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.912426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.912436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.912624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.912634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.912909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.912919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.913221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.913231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.913515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.913525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.913696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.913705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.913889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.913899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 
00:33:54.538 [2024-11-06 10:25:57.914285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.914295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.914624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.914633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.914924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.914934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.915244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.915254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.915539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.915548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.915680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.915690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.916088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.916099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.916296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.916306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.916615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.916625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.916819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.916830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 
00:33:54.538 [2024-11-06 10:25:57.917132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.917142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.917311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.917323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.917511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.917520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.917796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.917807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.918127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.918138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.918449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.918459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.918748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.918758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.919121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.919131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.919482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.919492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.919839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.919848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 
00:33:54.538 [2024-11-06 10:25:57.920116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.920126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.920302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.920311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.920499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.920509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.920814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.920823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.538 qpair failed and we were unable to recover it. 00:33:54.538 [2024-11-06 10:25:57.921138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.538 [2024-11-06 10:25:57.921148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.921478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.921488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.921804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.921813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.922125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.922135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.922290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.922300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.922629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.922638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 
00:33:54.539 [2024-11-06 10:25:57.923040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.923380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.923390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.923714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.923723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.923905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.923915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.924202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.924212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.924502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.924512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.924798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.924808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.925104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.925114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.925314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.925324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.925641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.925650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 
00:33:54.539 [2024-11-06 10:25:57.925837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.925847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.926032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.926042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.926326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.926335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.926665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.926675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.926978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.926988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.927274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.927283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.927554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.927564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.927897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.927909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.928198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.928207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.928365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.928375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 
00:33:54.539 [2024-11-06 10:25:57.928662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.928671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.928858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.928875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.929034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.929044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.929343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.929354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.929668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.929678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.929971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.929981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.930188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.930198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.930526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.930536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.930774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.930783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.931165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.931176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 
00:33:54.539 [2024-11-06 10:25:57.931480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.931490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.931818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.539 [2024-11-06 10:25:57.931828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.539 qpair failed and we were unable to recover it. 00:33:54.539 [2024-11-06 10:25:57.932121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.932306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.932369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.932564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.932748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.932943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.932954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.933006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.933017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.933318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.933328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 
00:33:54.540 [2024-11-06 10:25:57.933592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.933602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.933925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.933936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.934063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.934072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.934269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.934279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.934604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.934616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.934945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.934955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.935175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.935184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.935369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.935379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.935701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.935712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.935834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.935843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 
00:33:54.540 [2024-11-06 10:25:57.936232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.936242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.936558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.936568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.936888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.936898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.937218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.937228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.937553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.937563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.937879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.937890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.938011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.938020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.938244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.938254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.938600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.938610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.938809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.938819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 
00:33:54.540 [2024-11-06 10:25:57.938860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.938874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.939212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.939222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.939384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.939395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.540 qpair failed and we were unable to recover it. 00:33:54.540 [2024-11-06 10:25:57.939707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.540 [2024-11-06 10:25:57.939717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.940033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.940043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.940224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.940234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.940593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.940603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.940907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.940917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.941228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.941238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.941430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.941440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 
00:33:54.541 [2024-11-06 10:25:57.941755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.941765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.942085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.942095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.942406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.942416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.942706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.943029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.943040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.943212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.943221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.943530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.943541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.943854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.943868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.944061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.944071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.944379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.944389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 
00:33:54.541 [2024-11-06 10:25:57.944568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.944580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.944958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.944969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.945286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.945296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.945463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.945473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.945759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.945769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.946083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.946093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.946377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.946387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.946726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.946736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.946937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.946947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.947259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.947269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 
00:33:54.541 [2024-11-06 10:25:57.947594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.947605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.947796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.947806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.948134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.948144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.948467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.948477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.948800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.948810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.949153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.949163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.949496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.949506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.949787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.949796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.950008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.950018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 00:33:54.541 [2024-11-06 10:25:57.950368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.541 [2024-11-06 10:25:57.950378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.541 qpair failed and we were unable to recover it. 
00:33:54.542 [2024-11-06 10:25:57.950592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.950603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.950794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.950804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.951181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.951191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.951356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.951365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.951691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.951702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.952016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.952026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.952346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.952356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.952523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.952533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.952835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.952844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.953034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.953044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 
00:33:54.542 [2024-11-06 10:25:57.953332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.953343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.953660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.953670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.953984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.953997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.954302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.954312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.954619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.954630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.954930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.954940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.955277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.955287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.955464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.955475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.955710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.955720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.956064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.956074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 
00:33:54.542 [2024-11-06 10:25:57.956382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.956392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.956733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.956743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.957046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.957057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.957395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.957405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.957450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.957459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.957799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.957809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.958125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.958135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.958446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.958455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.958793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.958803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.959158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.959168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 
00:33:54.542 [2024-11-06 10:25:57.959486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.959496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.959542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.959552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.959868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.959878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.960053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.960062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.960250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.960260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.960591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.960600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.960887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.960897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.961213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.542 [2024-11-06 10:25:57.961223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.542 qpair failed and we were unable to recover it. 00:33:54.542 [2024-11-06 10:25:57.961549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.961559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.961728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.961740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 
00:33:54.543 [2024-11-06 10:25:57.961977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.961987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.962154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.962163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.962407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.962417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.962798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.962808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.963018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.963027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.963139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.963149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.963465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.963775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.963784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.963981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.963991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.964287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.964298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 
00:33:54.543 [2024-11-06 10:25:57.964351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.964361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.964414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.964424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.964663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.964674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.964985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.964995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.965337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.965346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.965547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.965556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.965766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.965776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.965977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.965988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.966161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.966171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.966505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.966515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 
00:33:54.543 [2024-11-06 10:25:57.966886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.966896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.967241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.967250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.967448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.967457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.967680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.967690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.967970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.967981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.968173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.968182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.968355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.968368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.968671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.968682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.968877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.968888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.969192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.969201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 
00:33:54.543 [2024-11-06 10:25:57.969372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.969382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.969554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.969564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.969902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.543 [2024-11-06 10:25:57.969913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.543 qpair failed and we were unable to recover it. 00:33:54.543 [2024-11-06 10:25:57.970143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.970152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.970495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.970505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.970547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.970556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.970873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.970884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.971186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.971198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.971541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.971551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.971749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.971759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 
00:33:54.544 [2024-11-06 10:25:57.972013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.972024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.972367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.972377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.972555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.972566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.972917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.972928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.973296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.973306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.973593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.973603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.973920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.973931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.974258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.974268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.974555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.974565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.974896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.974906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 
00:33:54.544 [2024-11-06 10:25:57.975162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.975171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.975508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.975519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.975690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.975699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.976105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.976115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.976434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.976444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.976721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.976731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.976934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.976945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.977202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.977211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.977538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.977548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.977710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.977723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 
00:33:54.544 [2024-11-06 10:25:57.977965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.977979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.978345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.978356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.978528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.978539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.978731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.978740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.979073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.979083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.979377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.979387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.979718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.544 [2024-11-06 10:25:57.979728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.544 qpair failed and we were unable to recover it. 00:33:54.544 [2024-11-06 10:25:57.980037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.980048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.980225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.980236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.980448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.980459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 
00:33:54.545 [2024-11-06 10:25:57.980792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.980802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.981119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.981130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.981310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.981321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.981495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.981505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.981793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.981803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.982121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.982131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.982353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.982362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.982615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.982624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.982966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.982976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.983159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.983168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 
00:33:54.545 [2024-11-06 10:25:57.983519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.983529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.983848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.983858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.984043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.984053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.984381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.984390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.984556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.984567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.984886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.984896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.985071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.985080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.985267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.985277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.985457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.985466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.985517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.985528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 
00:33:54.545 [2024-11-06 10:25:57.985847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.985857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.986232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.986242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.986558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.986568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.986761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.545 [2024-11-06 10:25:57.986770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.545 qpair failed and we were unable to recover it. 00:33:54.545 [2024-11-06 10:25:57.987048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.987061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.987381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.987390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.987731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.987740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.988111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.988121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.988318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.988329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.988503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.988513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 
00:33:54.546 [2024-11-06 10:25:57.988803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.988812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.989010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.989020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.989125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.989134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.989329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.989338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.989639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.989648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.989977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.989987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.990154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.990165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.990507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.990517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.990818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.990828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.991141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.991151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 
00:33:54.546 [2024-11-06 10:25:57.991320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.991330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.991647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.991657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.991850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.991861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.992192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.992202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.992546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.992556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.992775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.992785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.992958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.992967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.993261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.993271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.993612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.993622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.993932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.993942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 
00:33:54.546 [2024-11-06 10:25:57.994251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.994260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.994430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.994442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.994635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.994646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.994987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.994998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.995192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.995202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.995539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.995549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.995749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.995760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.995959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.995969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.996274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.996283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 00:33:54.546 [2024-11-06 10:25:57.996577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.546 [2024-11-06 10:25:57.996586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.546 qpair failed and we were unable to recover it. 
00:33:54.829 [2024-11-06 10:25:58.048961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.829 [2024-11-06 10:25:58.048971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.829 qpair failed and we were unable to recover it. 00:33:54.829 [2024-11-06 10:25:58.049276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.829 [2024-11-06 10:25:58.049286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.829 qpair failed and we were unable to recover it. 00:33:54.829 [2024-11-06 10:25:58.049480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.829 [2024-11-06 10:25:58.049490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.829 qpair failed and we were unable to recover it. 00:33:54.829 [2024-11-06 10:25:58.049776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.829 [2024-11-06 10:25:58.049786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.049953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.049964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.050234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.050245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.050533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.050542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.050852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.050866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.051212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.051223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.051534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.051543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 
00:33:54.830 [2024-11-06 10:25:58.051838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.051848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.052145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.052155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.052474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.052484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.052764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.052773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.052982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.052992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.053184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.053193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.053366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.053376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.053709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.053719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.053895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.053908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.054086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.054096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 
00:33:54.830 [2024-11-06 10:25:58.054385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.054395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.054736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.054746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.055061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.055070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.055409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.055419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.055789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.055799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.056160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.056170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.056504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.056514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.056843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.056852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.057125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.057135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.057324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.057334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 
00:33:54.830 [2024-11-06 10:25:58.057376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.057385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.057687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.057697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.058091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.058101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.058398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.058408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.058618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.058964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.058974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.059266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.059276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.059466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.059475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.059741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.059751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.060088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.060098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 
00:33:54.830 [2024-11-06 10:25:58.060505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.060515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.060829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.830 [2024-11-06 10:25:58.060839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.830 qpair failed and we were unable to recover it. 00:33:54.830 [2024-11-06 10:25:58.061197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.061207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.061507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.061517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.061839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.061848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.062171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.062181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.062555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.062564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.062899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.063090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.063101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.063434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 
00:33:54.831 [2024-11-06 10:25:58.063602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.063612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.063797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.063806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.064151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.064162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.064478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.064487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.064808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.064818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.065025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.065035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.065371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.065381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.065557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.065568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.065849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.065859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.066075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.066086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 
00:33:54.831 [2024-11-06 10:25:58.066417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.066427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.066606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.066616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.066926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.066937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.067265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.067642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.067651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.067826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.067835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.068140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.068150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.068196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.068205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.068511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.068521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.068826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.068836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 
00:33:54.831 [2024-11-06 10:25:58.069044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.069054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.069373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.069382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.069650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.069660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.069852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.069865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.070182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.070191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.070518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.070528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.070842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.070852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.071042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.071053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.071227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.071237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.831 [2024-11-06 10:25:58.071435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.071445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 
00:33:54.831 [2024-11-06 10:25:58.071641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.831 [2024-11-06 10:25:58.071651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.831 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.071993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.072003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.072224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.072234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.072422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.072432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.072506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.072515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.072824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.072833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.073051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.073064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.073381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.073391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.073586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.073596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.073940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.073950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 
00:33:54.832 [2024-11-06 10:25:58.074277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.074287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.074586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.074596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.074912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.074922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.075118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.075127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.075407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.075417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.075726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.075735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.076027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.076038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.076363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.076686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.076696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.076877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.076888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 
00:33:54.832 [2024-11-06 10:25:58.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.077071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.077399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.077409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.077730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.077739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.077919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.077930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.078135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.078144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.078344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.078355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.078560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.078569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.078871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.078881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.079047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.079057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.079286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.079296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 
00:33:54.832 [2024-11-06 10:25:58.079504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.079513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.079937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.079947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.080268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.080278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.080605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.080617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.080930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.080940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.081040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.081049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.081398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.081408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.081747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.081756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.081957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.081968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 00:33:54.832 [2024-11-06 10:25:58.082337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.832 [2024-11-06 10:25:58.082346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.832 qpair failed and we were unable to recover it. 
00:33:54.833 [2024-11-06 10:25:58.082526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.082536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.082722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.082732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.083139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.083149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.083528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.083537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.083584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.083593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.083888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.083899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.084216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.084226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.084411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.084420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.084722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.084732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.085138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.085148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 
00:33:54.833 [2024-11-06 10:25:58.085416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.085426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.085756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.085766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.086073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.086084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.086306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.086316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.086485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.086494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.086687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.086696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.086866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.086877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.087240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.087250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.087416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.087426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 00:33:54.833 [2024-11-06 10:25:58.087635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.833 [2024-11-06 10:25:58.087645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.833 qpair failed and we were unable to recover it. 
00:33:54.833 [2024-11-06 10:25:58.087968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.833 [2024-11-06 10:25:58.087981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420
00:33:54.833 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnection attempt between 2024-11-06 10:25:58.087968 and 10:25:58.146113 (log time 00:33:54.833-00:33:54.839): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x2017490 with addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered ...]
00:33:54.839 [2024-11-06 10:25:58.146103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.839 [2024-11-06 10:25:58.146113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420
00:33:54.839 qpair failed and we were unable to recover it.
00:33:54.839 [2024-11-06 10:25:58.146286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.146296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.146462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.146472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.146788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.146797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.147125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.147135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.147471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.147481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.147792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.147802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.148136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.148146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.148437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.148449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.148766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.148776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.149108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.149119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 
00:33:54.839 [2024-11-06 10:25:58.149286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.149296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.149591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.149601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.149771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.149780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.150216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.150226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.150535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.150545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.150733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.150743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.151001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.151010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.151305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.151314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.151479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.151488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.151858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.151874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 
00:33:54.839 [2024-11-06 10:25:58.152177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.152187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.152490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.152500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.152850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.152860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.153216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.153227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.153372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.153382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.153554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.153563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.153760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.153770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.154089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.154100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.154412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.154421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.154775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.154784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 
00:33:54.839 [2024-11-06 10:25:58.154946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.154956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.839 qpair failed and we were unable to recover it. 00:33:54.839 [2024-11-06 10:25:58.155323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.839 [2024-11-06 10:25:58.155333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.155522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.155539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.155855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.155867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.156170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.156180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.156377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.156387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.156728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.156739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.157074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.157084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.157374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.157383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.157670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.157679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 
00:33:54.840 [2024-11-06 10:25:58.157840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.157849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.158189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.158199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.158389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.158399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.158611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.158622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.158788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.158798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.159065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.159076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.159384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.159394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.159597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.159607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.159943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.159954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.160272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.160281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 
00:33:54.840 [2024-11-06 10:25:58.160476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.160495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.160836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.160846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.161189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.161199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.161497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.161507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.161718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.161728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.161972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.161982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.162331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.162340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.162684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.162694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.163033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.163044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.163332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.163343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 
00:33:54.840 [2024-11-06 10:25:58.163635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.163645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.163876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.163887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.164206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.164217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.164509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.164520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.164852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.164866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.165065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.165074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.165449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.165458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.165768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.165778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.166151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 00:33:54.840 [2024-11-06 10:25:58.166329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.840 [2024-11-06 10:25:58.166340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.840 qpair failed and we were unable to recover it. 
00:33:54.840 [2024-11-06 10:25:58.166632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.166642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.166831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.166842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.167158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.167169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.167496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.167507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.167667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.167677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.167998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.168012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.168404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.168414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.168785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.168795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.169124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.169134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.169344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.169354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 
00:33:54.841 [2024-11-06 10:25:58.169550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.169560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.169762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.169772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.170072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.170081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.170414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.170423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.170711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.170721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.170775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.170784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.170949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.170959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.171309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.171319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.171520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.171530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.171746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.171756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 
00:33:54.841 [2024-11-06 10:25:58.172139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.172149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.172440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.172449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.172753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.172762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.172939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.172949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.173341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.173351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.173672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.173682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.173877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.173887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.174231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.174240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.174527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.174537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.174823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.174832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 
00:33:54.841 [2024-11-06 10:25:58.175052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.175061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.175389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.175398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.175789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.175800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.176107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.176118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.176427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.176437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.176658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.176667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.176986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.176996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.177286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.177296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.177600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.841 [2024-11-06 10:25:58.177610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.841 qpair failed and we were unable to recover it. 00:33:54.841 [2024-11-06 10:25:58.177941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.177951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 
00:33:54.842 [2024-11-06 10:25:58.178112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.178122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.178332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.178342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.178675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.178685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.178973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.178983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.179333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.179342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.179555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.179565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.179782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.179791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.180094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.180104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.180418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.180428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.180718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.180727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 
00:33:54.842 [2024-11-06 10:25:58.181015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.181025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.181320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.181329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.181667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.181676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.181983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.181992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.182292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.182302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.182481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.182490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.182848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.182858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.183042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.183052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.183360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.183370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 00:33:54.842 [2024-11-06 10:25:58.183586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.842 [2024-11-06 10:25:58.183596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.842 qpair failed and we were unable to recover it. 
00:33:54.842 [2024-11-06 10:25:58.183934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.842 [2024-11-06 10:25:58.183944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420
00:33:54.842 qpair failed and we were unable to recover it.
00:33:54.842 [2024-11-06 10:25:58.184335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.842 [2024-11-06 10:25:58.184345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420
00:33:54.842 qpair failed and we were unable to recover it.
00:33:54.842 to 00:33:54.848 [2024-11-06 10:25:58.184558 through 10:25:58.242293] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same pair of errors (connect() failed, errno = 111; sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420) repeats for every remaining connection attempt in this window, and each attempt ends with "qpair failed and we were unable to recover it."
00:33:54.848 [2024-11-06 10:25:58.242462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.242471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.242849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.242859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.243233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.243243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.243397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.243406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.243814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.243824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.244027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.244231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.244400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.244455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.244643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 
00:33:54.848 [2024-11-06 10:25:58.244979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.244989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.245031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.245040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.245269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.245279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.245519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.245528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.245821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.245830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.246138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.246341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.246350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.246625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.246635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.246973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.246983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.247300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.247311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 
00:33:54.848 [2024-11-06 10:25:58.247656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.247666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.247981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.247992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.248312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.248322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.248615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.248624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.248921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.248932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.249294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.249304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.249667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.249676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.249840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.249850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.250212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.250223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 00:33:54.848 [2024-11-06 10:25:58.250536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.848 [2024-11-06 10:25:58.250546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.848 qpair failed and we were unable to recover it. 
00:33:54.848 [2024-11-06 10:25:58.250838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.250847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.251147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.251157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.251338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.251348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.251397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.251406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.251619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.251628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.251827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.251837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.252213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.252223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.252390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.252400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.252733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.252742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.253084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.253094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 
00:33:54.849 [2024-11-06 10:25:58.253383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.253393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.253692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.253906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.253916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.254247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.254257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.254583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.254593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.254915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.254926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.255237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.255247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.255536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.255548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.255871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.255881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.256248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.256257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 
00:33:54.849 [2024-11-06 10:25:58.256428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.256438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.256776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.256786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.257065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.257075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.257405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.257415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.257575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.257585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.257755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.257764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.258010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.258020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.258181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.258190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.258517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.258858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.258870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 
00:33:54.849 [2024-11-06 10:25:58.259245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.259254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.259539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.259549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.259741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.259750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.260078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.260089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.260419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.260429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.260718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.260728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.261047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.261058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.261377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.261387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.261711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.849 [2024-11-06 10:25:58.261720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.849 qpair failed and we were unable to recover it. 00:33:54.849 [2024-11-06 10:25:58.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.262049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 
00:33:54.850 [2024-11-06 10:25:58.262373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.262383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.262673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.262682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.263003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.263013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.263209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.263219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.263532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.263544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.263703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.263713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.263893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.263903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.264185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.264195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.264524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.264534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.264842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.264852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 
00:33:54.850 [2024-11-06 10:25:58.265220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.265230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.265406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.265415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.265738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.265747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.266030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.266040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.266206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.266216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.266618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.266627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.266914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.266924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.267245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.267255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.267545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.267555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.267743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.267753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 
00:33:54.850 [2024-11-06 10:25:58.267910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.267921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.268227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.268236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.268561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.268571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.268865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.268874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.269168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.269177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.269350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.269360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.269585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.269594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.269786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.269796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.269982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.270224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.270235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 
00:33:54.850 [2024-11-06 10:25:58.270581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.270591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.270916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.270928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.271283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.271293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.271609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.271912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.271922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.272238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.272248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.272571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.272581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.850 [2024-11-06 10:25:58.272893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.850 [2024-11-06 10:25:58.272903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.850 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.273082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.273092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.273390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.273400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 
00:33:54.851 [2024-11-06 10:25:58.273698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.273707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.274079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.274089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.274408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.274418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.274506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.274515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.274743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.274754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.275115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.275125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.275416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.275426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.275748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.275758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.276061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.276072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.276443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.276453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 
00:33:54.851 [2024-11-06 10:25:58.276789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.276799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.277114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.277125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.277293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.277302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.277642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.277652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.277949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.277959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.278279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.278288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.278605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.278614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.279023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.279033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.279084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.279095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 00:33:54.851 [2024-11-06 10:25:58.279389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.279399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it. 
00:33:54.851 [2024-11-06 10:25:58.279724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.851 [2024-11-06 10:25:58.279733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:54.851 qpair failed and we were unable to recover it.
[... the identical error triplet (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with only the timestamps advancing, roughly 200 more times between 10:25:58.280 and 10:25:58.337, log clock 00:33:54.851 through 00:33:55.128 ...]
00:33:55.128 [2024-11-06 10:25:58.336990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.337000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it.
00:33:55.128 [2024-11-06 10:25:58.337344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.337353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.337653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.337663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.337954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.337964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.338259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.338269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.338514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.338523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.338833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.338842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.339184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.128 [2024-11-06 10:25:58.339194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.128 qpair failed and we were unable to recover it. 00:33:55.128 [2024-11-06 10:25:58.339542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.339551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.339870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.339880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.340189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.340199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 
00:33:55.129 [2024-11-06 10:25:58.340438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.340447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.340771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.340780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.340965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.340977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.341365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.341375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.341693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.341703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.341876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.341886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.342236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.342246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.342457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.342467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.342656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.342665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.342927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.342942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 
00:33:55.129 [2024-11-06 10:25:58.343259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.343269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.343493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.343502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.343724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.343734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.343936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.343946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.344244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.344254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.344461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.344471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.344520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.344537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.344727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.344737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.345023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.345032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.345365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.345375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 
00:33:55.129 [2024-11-06 10:25:58.345533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.345542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.345818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.345828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.346146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.346156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.346460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.346470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.346783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.346793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.347019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.347030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.347292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.347302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.347601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.347611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.347806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.347817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.348010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.348021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 
00:33:55.129 [2024-11-06 10:25:58.348333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.348343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.348672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.348681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.348855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.348869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.349166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.349176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.349498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.349507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.129 qpair failed and we were unable to recover it. 00:33:55.129 [2024-11-06 10:25:58.349819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.129 [2024-11-06 10:25:58.349828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.350147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.350159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.350339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.350350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.350591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.350600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.350999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.351009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 
00:33:55.130 [2024-11-06 10:25:58.351319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.351329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.351647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.351656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.352020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.352030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.352369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.352379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.352558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.352567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.352808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.352818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.353106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.353117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.353307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.353316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.353637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.353647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.353971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.353981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 
00:33:55.130 [2024-11-06 10:25:58.354181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.354191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.354331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.354341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.354645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.354655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.354941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.354951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.355248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.355257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.355422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.355431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.355716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.355726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.356128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.356138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.356443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.356453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.356748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.356757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 
00:33:55.130 [2024-11-06 10:25:58.357091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.130 [2024-11-06 10:25:58.357101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.130 qpair failed and we were unable to recover it. 00:33:55.130 [2024-11-06 10:25:58.357299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.357308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.357483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.357495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.357686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.357700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.358032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.358043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.358223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.358234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.358413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.358423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.358621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.358631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.358933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.358943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.359256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.359265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 
00:33:55.131 [2024-11-06 10:25:58.359559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.359568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.359906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.359916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.360216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.360226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.360550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.360559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.360849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.360858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.361157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.361167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.361366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.361376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.361712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.361722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.361999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.362324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 
00:33:55.131 [2024-11-06 10:25:58.362415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.362576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.362750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.362925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.362935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.363178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.363187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.363522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.363531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.363874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.363884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.364070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.364079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.364408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.364418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.364611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.364621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 
00:33:55.131 [2024-11-06 10:25:58.364999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.365009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.365391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.365401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.365574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.365584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.365892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.365901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.365991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.366001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.366047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.366057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.366378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.366387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.131 qpair failed and we were unable to recover it. 00:33:55.131 [2024-11-06 10:25:58.366569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.131 [2024-11-06 10:25:58.366580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.366738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.366747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.367035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.367045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 
00:33:55.132 [2024-11-06 10:25:58.367390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.367400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.367567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.367576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.367740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.367749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.368056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.368299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.368309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.368612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.368621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.368940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.368950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.369282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.369292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.369608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.369617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.369929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.369939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 
00:33:55.132 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:55.132 [2024-11-06 10:25:58.370259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.370270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:33:55.132 [2024-11-06 10:25:58.370583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.370594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.132 [2024-11-06 10:25:58.370917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.370928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:55.132 [2024-11-06 10:25:58.371107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.371118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:55.132 [2024-11-06 10:25:58.371454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.371465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.371796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.371806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.372150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.372161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.372453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.372463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 
00:33:55.132 [2024-11-06 10:25:58.372755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.372765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.373004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.373014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.373350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.373360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.373405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.373413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.373734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.373744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.374171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.374181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.374499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.374508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.374797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.374807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.374980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.374990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.375037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.375047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 
00:33:55.132 [2024-11-06 10:25:58.375408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.375418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.132 [2024-11-06 10:25:58.375590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.132 [2024-11-06 10:25:58.375601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.132 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.375788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.375797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.376144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.376154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.376478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.376489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.376801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.376811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.376993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.377324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.377334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.377624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.377633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.377800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.377809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 
00:33:55.133 [2024-11-06 10:25:58.378030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.378040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.378393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.378404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.378453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.378462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.378780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.378790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.378934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.378944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.379171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.379182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.379373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.379382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.379704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.379714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.380027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.380037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.380206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.380216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 
00:33:55.133 [2024-11-06 10:25:58.380580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.380590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.380905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.380916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.381089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.381099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.381316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.381326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.381518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.381528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.381870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.382231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.382241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.382553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.382563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.382872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.382885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.383190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.383200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 
00:33:55.133 [2024-11-06 10:25:58.383381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.383393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.383743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.383754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.384074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.384085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.384246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.384257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.384586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.384595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.133 [2024-11-06 10:25:58.384883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.133 [2024-11-06 10:25:58.384893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.133 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.385178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.385188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.385358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.385377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.385537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.385547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.385751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.385761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 
00:33:55.134 [2024-11-06 10:25:58.386106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.386117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.386476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.386486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.386789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.386800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.386977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.386987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.387394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.387404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.387595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.387604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.387918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.387929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.388103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.388112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.388394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.388405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.388706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.388715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 
00:33:55.134 [2024-11-06 10:25:58.388767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.388777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.389173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.389183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.389493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.389503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.389801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.389810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.390007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.390017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.390388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.390401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.390442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.390451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.390847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.390857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.391075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.391086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.391405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.391414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 
00:33:55.134 [2024-11-06 10:25:58.391734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.391743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.391921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.391933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.392231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.392241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.392571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.392582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.392781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.392791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.393026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.393036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.393250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.393270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.393473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.393484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.393534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.393545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.393700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.393711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 
00:33:55.134 [2024-11-06 10:25:58.394079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.394090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.394296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.394306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.394618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.134 [2024-11-06 10:25:58.394628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.134 qpair failed and we were unable to recover it. 00:33:55.134 [2024-11-06 10:25:58.394992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.395002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.395346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.395356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.395700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.395710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.395889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.395900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.396187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.396197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.396492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.396502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.396873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.396883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 
00:33:55.135 [2024-11-06 10:25:58.397072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.397082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.397351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.397361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.397534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.397548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.397847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.397858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.398033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.398043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.398118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.398128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.398458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.398467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.398780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.398791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.399135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.399145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.399464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.399474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 
00:33:55.135 [2024-11-06 10:25:58.399636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.399646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.399986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.399997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.400162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.400173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.400367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.400377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.400584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.400594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.400908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.400919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.401272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.401283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.401596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.401605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.401797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.401806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 00:33:55.135 [2024-11-06 10:25:58.402112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.135 [2024-11-06 10:25:58.402122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.135 qpair failed and we were unable to recover it. 
00:33:55.135 [2024-11-06 10:25:58.402418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.402428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.402619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.402628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.402828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.402839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.403044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.403055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.403349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.403358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.403648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.403658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.403975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.403985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.404269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.404279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.404572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.404582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.404871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.404882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 
00:33:55.136 [2024-11-06 10:25:58.405049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.405059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.405298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.405309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.405641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.405651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.405977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.405988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.406343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.406353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.406515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.406524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.406881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.407187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.407196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.407491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.407501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.407815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.407825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 
00:33:55.136 [2024-11-06 10:25:58.408027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.408037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.408206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.408216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.408591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.408601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.408894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.408906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.409298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.409309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.409604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.409614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.409793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.409803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.410164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.410175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.136 [2024-11-06 10:25:58.410412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.410424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 
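The trace line above (nvmf/common.sh@512) shows the suite registering its cleanup handler while the reconnect errors are still streaming: on SIGINT, SIGTERM or normal exit it makes a best-effort call to process_shm for the app's shared-memory segment and then runs nvmftestfini to tear the test environment down. A minimal sketch of the same bash pattern, with the suite's own helpers treated as placeholders:

  # Sketch of the cleanup pattern from nvmf/common.sh; process_shm and
  # nvmftestfini are the test suite's helpers, shown here only as placeholders.
  cleanup() {
      process_shm --id "$NVMF_APP_SHM_ID" || :   # best-effort shared-memory dump
      nvmftestfini                               # tear down the target and test config
  }
  trap cleanup SIGINT SIGTERM EXIT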
00:33:55.136 [2024-11-06 10:25:58.410627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.410637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:55.136 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.136 [2024-11-06 10:25:58.411045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.411057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.411252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.411262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:55.136 [2024-11-06 10:25:58.411572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.411583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.411874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.411885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.136 [2024-11-06 10:25:58.412067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.136 [2024-11-06 10:25:58.412077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.136 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.412427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.412438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.412749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.412758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.413053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.413062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 
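Interleaved with the connect errors, target_disconnect.sh@19 issues the first setup RPC for this test case: rpc_cmd is the suite's wrapper around SPDK's rpc.py, and bdev_malloc_create 64 512 -b Malloc0 creates a RAM-backed block device named Malloc0, 64 MB in size with a 512-byte block size; the xtrace_disable / set +x lines merely silence bash tracing around it. Run by hand, the equivalent would look roughly like the following (script path and RPC socket are assumptions, not taken from this log):

  # Equivalent direct invocation of the RPC that the wrapper issues; adjust the
  # rpc.py path and RPC socket for the local setup.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512-byte blocks
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm the bdev exists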
00:33:55.137 [2024-11-06 10:25:58.413225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.413234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.413541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.413552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.413886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.413896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.414202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.414212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.414515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.414524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.414688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.414698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.414980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.414990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.415344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.415353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.415641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.415651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.415820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.415830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 
00:33:55.137 [2024-11-06 10:25:58.416112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.416125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.416182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.416191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.416474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.416483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.416781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.416791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.416990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.417000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.417355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.417365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.417659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.417668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.417836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.417846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.418143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.418153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 00:33:55.137 [2024-11-06 10:25:58.418486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.137 [2024-11-06 10:25:58.418495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.137 qpair failed and we were unable to recover it. 
00:33:55.137 [2024-11-06 10:25:58.418814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.137 [2024-11-06 10:25:58.418824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420
00:33:55.137 qpair failed and we were unable to recover it.
00:33:55.137 [... the same three-line connect()/qpair-failure pattern repeats for successive attempts from 10:25:58.419122 through 10:25:58.439339, all against tqpair=0x2017490, addr=10.0.0.2, port=4420 ...]
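Note: errno = 111 is ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections at 10.0.0.2:4420 while the host driver kept retrying, which is the expected state while the target listener is absent in this disconnect test. A minimal host-side sketch of how one could confirm that by hand (assuming a netcat that supports -z and a host with nvme-cli installed; neither command is part of the test scripts):

    # Probe the target port; with no listener the probe fails with "connection refused"
    nc -z -w 1 10.0.0.2 4420; echo "nc exit code: $?"
    # An NVMe/TCP connect attempt against the same address/port is refused the same way
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1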
00:33:55.139 Malloc0
00:33:55.139 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:55.139 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:55.139 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:55.139 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:55.139 [... host-side connect()/qpair failures continue around these calls, from 10:25:58.439688 through 10:25:58.441639 ...]
00:33:55.139 [... host-side connect()/qpair failures continue, from 10:25:58.441815 through 10:25:58.442882 ...]
00:33:55.139 [2024-11-06 10:25:58.443005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:55.139 [... host-side connect()/qpair failures continue, from 10:25:58.443212 through 10:25:58.444545 ...]
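Note: the *** TCP Transport Init *** notice is the target-side effect of the rpc_cmd nvmf_create_transport -t tcp -o call traced above. A quick way to double-check it on the target, sketched under the assumption of a standard SPDK checkout with the target's RPC socket at its default path (the test itself does not run this):

    # List the transports configured on the running nvmf target;
    # a "tcp" entry should appear once nvmf_create_transport has succeeded
    ./scripts/rpc.py nvmf_get_transports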
00:33:55.139 [... host-side connect()/qpair failures continue, from 10:25:58.444856 through 10:25:58.450227 ...]
00:33:55.140 [... host-side connect()/qpair failures continue, from 10:25:58.450534 through 10:25:58.451197 ...]
00:33:55.140 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:55.140 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:55.140 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:55.140 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:55.140 [... host-side connect()/qpair failures interleave with these calls, from 10:25:58.451552 through 10:25:58.452353 ...]
00:33:55.140 [... host-side connect()/qpair failures continue, from 10:25:58.452699 through 10:25:58.457762 ...]
00:33:55.141 [... host-side connect()/qpair failures continue, from 10:25:58.458022 through 10:25:58.459349 ...]
00:33:55.141 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:55.141 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:55.141 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:55.141 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:55.141 [... host-side connect()/qpair failures interleave with these calls, from 10:25:58.459536 through 10:25:58.460163 ...]
00:33:55.141 [... host-side connect()/qpair failures continue, from 10:25:58.460362 through 10:25:58.467792 ...]
00:33:55.142 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:55.142 [... host-side connect()/qpair failures continue, from 10:25:58.468130 through 10:25:58.468407 ...]
00:33:55.142 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:55.142 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:55.142 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:55.142 [... host-side connect()/qpair failures interleave with these calls, from 10:25:58.468605 through 10:25:58.470625 ...]
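Note: taken together, the rpc_cmd calls traced at host/target_disconnect.sh lines 21-25 bring up the target side in the usual order: transport, subsystem, namespace, listener. The sketch below restates them as direct rpc.py invocations with the same arguments the trace shows (the script path and the default RPC socket are assumptions; the test drives these through its rpc_cmd wrapper instead):

    # Target bring-up sequence mirrored from the trace above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2:4420 is in place, the host's retried connects should stop being refused and the qpair can be established.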
00:33:55.142 [2024-11-06 10:25:58.470951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.470962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.471286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.471296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.471495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.471506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.471700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.471710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.471921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.471931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.472099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.472108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.472413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.472423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.472744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.472754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.473039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.473049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.473225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.473235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 
00:33:55.142 [2024-11-06 10:25:58.473442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.473452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.473770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.473779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.474147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.474157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.474476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.474485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.474779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.142 [2024-11-06 10:25:58.474789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.142 qpair failed and we were unable to recover it. 00:33:55.142 [2024-11-06 10:25:58.475153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.143 [2024-11-06 10:25:58.475163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017490 with addr=10.0.0.2, port=4420 00:33:55.143 qpair failed and we were unable to recover it. 
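[editor's note] The connect() failures repeated above all report errno = 111, which on Linux is ECONNREFUSED: the initiator's TCP connect to 10.0.0.2:4420 is rejected because nothing is listening on that port yet (the listener is only added by the rpc_cmd calls further down). A minimal sketch that reproduces the same errno against any port with no listener; the address and port are copied from the log and are purely illustrative:
  # attempt a raw TCP connect with bash's /dev/tcp; with no listener this fails with "Connection refused"
  bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo "connect failed, exit=$?"
  # map the numeric errno to its symbolic name and message
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused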
00:33:55.143 [2024-11-06 10:25:58.475251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:55.143 [2024-11-06 10:25:58.483785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.483860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.483889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.483898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.483906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.483927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.143 10:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4098431 00:33:55.143 [2024-11-06 10:25:58.493674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.493734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.493749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.493756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.493763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.493778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 
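[editor's note] The two rpc_cmd wrapper calls in this part of the trace (host/target_disconnect.sh lines 25 and 26) issue SPDK's nvmf_subsystem_add_listener JSON-RPC method, once for the data subsystem and once for discovery. A hedged equivalent using scripts/rpc.py directly is sketched below; the script path and a target already running with its default RPC socket are assumptions, while the flags and values are copied verbatim from the log:
  # add a TCP listener on 10.0.0.2:4420 for the data subsystem, then for the discovery subsystem
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Once the first listener is up, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above, and the ECONNREFUSED loop gives way to the fabric-level CONNECT failures that follow.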
00:33:55.143 [2024-11-06 10:25:58.503698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.503775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.503789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.503796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.503802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.503816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.513714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.513789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.513803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.513810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.513817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.513831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.523666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.523725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.523739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.523750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.523756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.523770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 
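[editor's note] The "Connect command completed with error: sct 1, sc 130" lines repeated in these blocks decode as status code type 1 (Command Specific) with status code 0x82, which for a Fabrics Connect command corresponds to "Connect Invalid Parameters" under the naming used in SPDK's include/spdk/nvmf_spec.h; that is consistent with the target-side "Unknown controller ID 0x1" rejection of the I/O qpair seen at the start of each block. Treat the mapping below as a hedged sketch, not an authoritative spec decode:
  # decode the sct/sc pair printed by nvme_fabric_qpair_connect_poll
  sct=1; sc=130
  printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"    # -> sct=0x1 sc=0x82
  case "$sc" in
    128) echo 'Connect Incompatible Format' ;;
    129) echo 'Connect Controller Busy' ;;
    130) echo 'Connect Invalid Parameters' ;;   # the value reported throughout this log
    131) echo 'Connect Restart Discovery' ;;
    132) echo 'Connect Invalid Host' ;;
  esac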
00:33:55.143 [2024-11-06 10:25:58.533686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.533757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.533772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.533779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.533785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.533799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.543634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.543689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.543704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.543712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.543718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.543732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.553736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.553793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.553808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.553815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.553821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.553835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 
00:33:55.143 [2024-11-06 10:25:58.563796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.563880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.563894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.563901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.563907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.563924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.573743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.573803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.573816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.573823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.573830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.573844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.583866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.583919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.583932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.583939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.583946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.583960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 
00:33:55.143 [2024-11-06 10:25:58.593709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.143 [2024-11-06 10:25:58.593763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.143 [2024-11-06 10:25:58.593777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.143 [2024-11-06 10:25:58.593784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.143 [2024-11-06 10:25:58.593791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.143 [2024-11-06 10:25:58.593804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.143 qpair failed and we were unable to recover it. 00:33:55.143 [2024-11-06 10:25:58.603879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.144 [2024-11-06 10:25:58.603936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.144 [2024-11-06 10:25:58.603950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.144 [2024-11-06 10:25:58.603957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.144 [2024-11-06 10:25:58.603963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.144 [2024-11-06 10:25:58.603977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.144 qpair failed and we were unable to recover it. 00:33:55.144 [2024-11-06 10:25:58.613841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.144 [2024-11-06 10:25:58.613901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.144 [2024-11-06 10:25:58.613915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.144 [2024-11-06 10:25:58.613922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.144 [2024-11-06 10:25:58.613929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.144 [2024-11-06 10:25:58.613942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.144 qpair failed and we were unable to recover it. 
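[editor's note] Each of these failed attempts ends with "CQ transport error -6 (No such device or address) on qpair id 3": the -6 is a negative Linux errno, ENXIO, which spdk_nvme_qpair_process_completions reports here once the qpair's transport connection has failed. A one-liner to confirm the errno mapping on any Linux host:
  python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'   # ENXIO - No such device or address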
00:33:55.406 [2024-11-06 10:25:58.623879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.406 [2024-11-06 10:25:58.623940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.406 [2024-11-06 10:25:58.623953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.406 [2024-11-06 10:25:58.623960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.406 [2024-11-06 10:25:58.623967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.406 [2024-11-06 10:25:58.623981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.406 qpair failed and we were unable to recover it. 00:33:55.406 [2024-11-06 10:25:58.633954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.406 [2024-11-06 10:25:58.634060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.406 [2024-11-06 10:25:58.634074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.406 [2024-11-06 10:25:58.634081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.406 [2024-11-06 10:25:58.634088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.406 [2024-11-06 10:25:58.634101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.406 qpair failed and we were unable to recover it. 00:33:55.406 [2024-11-06 10:25:58.644007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.644075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.644088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.644095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.644101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.644115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 
00:33:55.407 [2024-11-06 10:25:58.653956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.654013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.654027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.654037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.654043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.654057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.664022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.664076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.664089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.664096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.664102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.664116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.674026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.674084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.674099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.674106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.674112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.674126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 
00:33:55.407 [2024-11-06 10:25:58.684015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.684069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.684084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.684091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.684097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.684111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.694125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.694202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.694216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.694223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.694230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.694246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.704149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.704199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.704212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.704219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.704225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.704239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 
00:33:55.407 [2024-11-06 10:25:58.714173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.714232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.714245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.714252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.714258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.714272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.724368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.724435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.724449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.724456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.724462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.724475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.734287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.734358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.734371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.734378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.734385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.734399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 
00:33:55.407 [2024-11-06 10:25:58.744260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.744362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.744376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.744383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.744389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.744403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.754341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.754402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.754415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.754422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.754429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.754442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.407 qpair failed and we were unable to recover it. 00:33:55.407 [2024-11-06 10:25:58.764326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.407 [2024-11-06 10:25:58.764380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.407 [2024-11-06 10:25:58.764393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.407 [2024-11-06 10:25:58.764400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.407 [2024-11-06 10:25:58.764407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.407 [2024-11-06 10:25:58.764420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 
00:33:55.408 [2024-11-06 10:25:58.774338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.774386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.774399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.774406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.774412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.774425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.784232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.784283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.784297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.784307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.784313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.784327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.794370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.794427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.794440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.794447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.794453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.794467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 
00:33:55.408 [2024-11-06 10:25:58.804452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.804504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.804518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.804525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.804531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.804544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.814445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.814495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.814509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.814516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.814522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.814535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.824487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.824539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.824552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.824559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.824565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.824582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 
00:33:55.408 [2024-11-06 10:25:58.834507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.834563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.834577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.834584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.834590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.834604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.844535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.844592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.844606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.844613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.844619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.844632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.854589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.854664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.854689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.854697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.854704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.854724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 
00:33:55.408 [2024-11-06 10:25:58.864630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.864681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.864696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.864703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.864709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.864724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.874491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.874545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.874559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.874567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.874573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.874587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.884647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.884721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.884734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.884741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.884747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.884761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 
00:33:55.408 [2024-11-06 10:25:58.894668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.408 [2024-11-06 10:25:58.894755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.408 [2024-11-06 10:25:58.894770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.408 [2024-11-06 10:25:58.894777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.408 [2024-11-06 10:25:58.894783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.408 [2024-11-06 10:25:58.894797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.408 qpair failed and we were unable to recover it. 00:33:55.408 [2024-11-06 10:25:58.904675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.409 [2024-11-06 10:25:58.904725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.409 [2024-11-06 10:25:58.904739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.409 [2024-11-06 10:25:58.904746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.409 [2024-11-06 10:25:58.904752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.409 [2024-11-06 10:25:58.904765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.409 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.914716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.914770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.914783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.914794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.914801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.914815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 
00:33:55.672 [2024-11-06 10:25:58.924668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.924761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.924776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.924783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.924789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.924802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.934781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.934834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.934848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.934855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.934861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.934880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.944693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.944756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.944769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.944776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.944783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.944797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 
00:33:55.672 [2024-11-06 10:25:58.954875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.954933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.954947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.954954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.954960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.954977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.964885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.964941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.964955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.964962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.964968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.964982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.974896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.974950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.974963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.974970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.974977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.974991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 
00:33:55.672 [2024-11-06 10:25:58.984883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.984936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.984951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.984960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.984967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.984982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:58.994960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:58.995047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:58.995061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:58.995068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:58.995074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:58.995088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.672 [2024-11-06 10:25:59.004959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:59.005060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:59.005075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:59.005083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:59.005089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:59.005104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 
00:33:55.672 [2024-11-06 10:25:59.015002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.672 [2024-11-06 10:25:59.015054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.672 [2024-11-06 10:25:59.015067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.672 [2024-11-06 10:25:59.015075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.672 [2024-11-06 10:25:59.015081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.672 [2024-11-06 10:25:59.015095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.672 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.025006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.025061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.025074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.025082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.025088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.025101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.035075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.035131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.035145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.035152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.035158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.035172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 
00:33:55.673 [2024-11-06 10:25:59.045114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.045175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.045195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.045202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.045209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.045222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.055112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.055165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.055179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.055186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.055192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.055205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.065134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.065189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.065202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.065209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.065216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.065229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 
00:33:55.673 [2024-11-06 10:25:59.075152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.075206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.075220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.075227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.075233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.075246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.085217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.085312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.085326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.085334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.085340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.085357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.095208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.095268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.095282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.095290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.095296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.095310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 
00:33:55.673 [2024-11-06 10:25:59.105229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.105285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.105299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.105306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.105313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.105326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.115292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.115349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.115362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.115369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.115375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.115390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.125312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.125371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.125385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.125393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.125399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.125413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 
00:33:55.673 [2024-11-06 10:25:59.135335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.135392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.135405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.135413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.135419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.135433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.145362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.673 [2024-11-06 10:25:59.145414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.673 [2024-11-06 10:25:59.145428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.673 [2024-11-06 10:25:59.145435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.673 [2024-11-06 10:25:59.145441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.673 [2024-11-06 10:25:59.145455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.673 qpair failed and we were unable to recover it. 00:33:55.673 [2024-11-06 10:25:59.155397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.674 [2024-11-06 10:25:59.155470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.674 [2024-11-06 10:25:59.155484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.674 [2024-11-06 10:25:59.155491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.674 [2024-11-06 10:25:59.155497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.674 [2024-11-06 10:25:59.155512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.674 qpair failed and we were unable to recover it. 
00:33:55.674 [2024-11-06 10:25:59.165426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.674 [2024-11-06 10:25:59.165500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.674 [2024-11-06 10:25:59.165514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.674 [2024-11-06 10:25:59.165521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.674 [2024-11-06 10:25:59.165528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.674 [2024-11-06 10:25:59.165543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.674 qpair failed and we were unable to recover it. 00:33:55.935 [2024-11-06 10:25:59.175416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.935 [2024-11-06 10:25:59.175470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.935 [2024-11-06 10:25:59.175487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.935 [2024-11-06 10:25:59.175495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.935 [2024-11-06 10:25:59.175501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.935 [2024-11-06 10:25:59.175515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.935 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.185467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.185524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.185537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.185545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.185551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.185565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 
00:33:55.936 [2024-11-06 10:25:59.195501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.195557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.195571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.195578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.195585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.195599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.205525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.205618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.205644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.205653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.205660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.205679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.215568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.215659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.215685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.215694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.215701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.215726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 
00:33:55.936 [2024-11-06 10:25:59.225473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.225524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.225540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.225548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.225554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.225570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.235506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.235565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.235579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.235587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.235593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.235607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.245657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.245739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.245753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.245760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.245767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.245781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 
00:33:55.936 [2024-11-06 10:25:59.255674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.255739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.255753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.255760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.255767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.255780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.265577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.265639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.265653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.265661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.265667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.265682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.275734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.275787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.275801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.275808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.275815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.275829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 
00:33:55.936 [2024-11-06 10:25:59.285774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.285835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.285849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.285858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.285868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.285882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.295791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.295866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.295880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.295887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.295894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.295908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.305815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.305868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.305886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.305894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.305900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.305915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 
00:33:55.936 [2024-11-06 10:25:59.315841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.315901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.315915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.315923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.936 [2024-11-06 10:25:59.315929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.936 [2024-11-06 10:25:59.315943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.936 qpair failed and we were unable to recover it. 00:33:55.936 [2024-11-06 10:25:59.325886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.936 [2024-11-06 10:25:59.325979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.936 [2024-11-06 10:25:59.325992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.936 [2024-11-06 10:25:59.325999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.326006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.326020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.335821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.335881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.335896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.335904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.335911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.335926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 
00:33:55.937 [2024-11-06 10:25:59.345924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.345979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.345994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.346001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.346008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.346025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.355851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.355953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.355966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.355974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.355980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.355994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.365987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.366041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.366055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.366062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.366068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.366082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 
00:33:55.937 [2024-11-06 10:25:59.375972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.376028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.376041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.376048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.376055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.376069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.386007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.386064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.386077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.386084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.386090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.386104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.396058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.396116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.396130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.396137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.396144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.396157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 
00:33:55.937 [2024-11-06 10:25:59.406098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.406152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.406165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.406173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.406179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.406193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.416109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.416162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.416176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.416183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.416189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.416203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 00:33:55.937 [2024-11-06 10:25:59.426145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.937 [2024-11-06 10:25:59.426194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.937 [2024-11-06 10:25:59.426207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.937 [2024-11-06 10:25:59.426215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.937 [2024-11-06 10:25:59.426221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:55.937 [2024-11-06 10:25:59.426235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:55.937 qpair failed and we were unable to recover it. 
00:33:56.199 [2024-11-06 10:25:59.436174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.436229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.436247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.436254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.436261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.436274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.446208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.446263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.446276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.446283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.446289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.446303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.456233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.456285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.456298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.456305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.456312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.456325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 
00:33:56.199 [2024-11-06 10:25:59.466264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.466360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.466373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.466381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.466388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.466401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.476293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.476389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.476403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.476410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.476420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.476433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.486288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.486343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.486359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.486367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.486373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.486388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 
00:33:56.199 [2024-11-06 10:25:59.496347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.496403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.496417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.496424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.496430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.496444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.506340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.506394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.506408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.506415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.506421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.506435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.516389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.516446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.516460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.516468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.516474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.516488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 
00:33:56.199 [2024-11-06 10:25:59.526447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.526500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.526514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.526521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.526528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.526541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.536312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.536365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.536379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.536386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.536393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.536406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 00:33:56.199 [2024-11-06 10:25:59.546532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.199 [2024-11-06 10:25:59.546639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.199 [2024-11-06 10:25:59.546654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.199 [2024-11-06 10:25:59.546661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.199 [2024-11-06 10:25:59.546667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.199 [2024-11-06 10:25:59.546681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.199 qpair failed and we were unable to recover it. 
00:33:56.200 [2024-11-06 10:25:59.556516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.556570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.556584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.556591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.556597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.556611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.566542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.566609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.566639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.566648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.566655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.566675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.576471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.576553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.576570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.576578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.576584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.576601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 
00:33:56.200 [2024-11-06 10:25:59.586582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.586637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.586652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.586660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.586666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.586681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.596633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.596690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.596705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.596712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.596719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.596733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.606678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.606737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.606762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.606770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.606782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.606802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 
00:33:56.200 [2024-11-06 10:25:59.616665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.616721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.616736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.616743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.616750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.616765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.626693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.626749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.626763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.626771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.626778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.626792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.636751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.636810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.636824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.636832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.636838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.636852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 
00:33:56.200 [2024-11-06 10:25:59.646780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.646841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.646856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.646868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.646875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.646890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.656737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.656839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.656855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.656867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.656874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.656889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.666799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.666894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.666909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.666916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.666923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.666937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 
00:33:56.200 [2024-11-06 10:25:59.676852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.676946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.676960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.676968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.676975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.676989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.686894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.686950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.686964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.686971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.686978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.686992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 00:33:56.200 [2024-11-06 10:25:59.696907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.200 [2024-11-06 10:25:59.696996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.200 [2024-11-06 10:25:59.697017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.200 [2024-11-06 10:25:59.697025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.200 [2024-11-06 10:25:59.697031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.200 [2024-11-06 10:25:59.697046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.200 qpair failed and we were unable to recover it. 
00:33:56.462 [2024-11-06 10:25:59.706811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.462 [2024-11-06 10:25:59.706953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.462 [2024-11-06 10:25:59.706969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.462 [2024-11-06 10:25:59.706976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.462 [2024-11-06 10:25:59.706983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.462 [2024-11-06 10:25:59.706997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.462 qpair failed and we were unable to recover it. 00:33:56.462 [2024-11-06 10:25:59.716965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.462 [2024-11-06 10:25:59.717023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.462 [2024-11-06 10:25:59.717037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.462 [2024-11-06 10:25:59.717045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.462 [2024-11-06 10:25:59.717051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.462 [2024-11-06 10:25:59.717065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.462 qpair failed and we were unable to recover it. 00:33:56.462 [2024-11-06 10:25:59.727001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.727063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.727076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.727083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.727089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.727104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 
00:33:56.463 [2024-11-06 10:25:59.736981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.737033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.737047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.737054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.737065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.737079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.747038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.747090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.747104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.747112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.747118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.747132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.757010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.757065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.757080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.757088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.757095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.757110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 
00:33:56.463 [2024-11-06 10:25:59.767104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.767160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.767174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.767182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.767188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.767202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.777106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.777155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.777169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.777176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.777183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.777197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.787142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.787197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.787211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.787218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.787224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.787238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 
00:33:56.463 [2024-11-06 10:25:59.797076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.797132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.797146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.797153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.797159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.797173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.807197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.807253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.807267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.807274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.807281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.807295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.817251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.817309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.817323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.817331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.817337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.817351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 
00:33:56.463 [2024-11-06 10:25:59.827247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.827304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.827321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.827329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.827336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.827349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.837223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.837284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.837297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.837305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.837311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.837325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 00:33:56.463 [2024-11-06 10:25:59.847210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.463 [2024-11-06 10:25:59.847261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.463 [2024-11-06 10:25:59.847274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.463 [2024-11-06 10:25:59.847282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.463 [2024-11-06 10:25:59.847288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.463 [2024-11-06 10:25:59.847301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.463 qpair failed and we were unable to recover it. 
00:33:56.463 [2024-11-06 10:25:59.857354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.857402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.857416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.857423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.857430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.857443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.867348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.867398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.867412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.867419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.867429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.867443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.877432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.877493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.877507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.877514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.877521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.877535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 
00:33:56.464 [2024-11-06 10:25:59.887445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.887500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.887513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.887521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.887528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.887541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.897534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.897593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.897607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.897614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.897620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.897634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.907489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.907588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.907602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.907610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.907616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.907631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 
00:33:56.464 [2024-11-06 10:25:59.917539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.917617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.917631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.917639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.917645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.917658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.927605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.927682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.927696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.927703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.927709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.927723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.937577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.937659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.937672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.937679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.937686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.937699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 
00:33:56.464 [2024-11-06 10:25:59.947605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.947658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.947671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.947679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.947685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.947698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.464 [2024-11-06 10:25:59.957657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.464 [2024-11-06 10:25:59.957759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.464 [2024-11-06 10:25:59.957775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.464 [2024-11-06 10:25:59.957784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.464 [2024-11-06 10:25:59.957790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.464 [2024-11-06 10:25:59.957805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.464 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:25:59.967672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:25:59.967761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:25:59.967775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:25:59.967782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:25:59.967789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:25:59.967803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 
00:33:56.728 [2024-11-06 10:25:59.977688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:25:59.977742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:25:59.977755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:25:59.977762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:25:59.977769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:25:59.977782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:25:59.987712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:25:59.987769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:25:59.987783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:25:59.987790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:25:59.987796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:25:59.987810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:25:59.997798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:25:59.997859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:25:59.997876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:25:59.997884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:25:59.997894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:25:59.997908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 
00:33:56.728 [2024-11-06 10:26:00.007700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.007764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.007781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.007788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.007795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.007810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.017835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.017911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.017936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.017944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.017952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.017966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.027733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.027794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.027808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.027816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.027822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.027836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 
00:33:56.728 [2024-11-06 10:26:00.037931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.037990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.038003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.038011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.038017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.038031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.047895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.047948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.047961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.047969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.047975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.047990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.057911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.057963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.057976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.057984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.057990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.058005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 
00:33:56.728 [2024-11-06 10:26:00.067871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.067961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.067974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.067982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.067988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.068002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.077978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.078040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.078054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.078061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.078068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.728 [2024-11-06 10:26:00.078081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.728 qpair failed and we were unable to recover it. 00:33:56.728 [2024-11-06 10:26:00.087983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.728 [2024-11-06 10:26:00.088061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.728 [2024-11-06 10:26:00.088077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.728 [2024-11-06 10:26:00.088085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.728 [2024-11-06 10:26:00.088092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.088106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 
00:33:56.729 [2024-11-06 10:26:00.098037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.098094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.098109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.098117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.098124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.098141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.108055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.108158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.108173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.108181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.108188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.108202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.118059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.118131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.118145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.118152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.118159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.118173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 
00:33:56.729 [2024-11-06 10:26:00.128145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.128221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.128234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.128242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.128252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.128266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.138190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.138246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.138260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.138267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.138274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.138288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.148032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.148084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.148097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.148105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.148111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.148125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 
00:33:56.729 [2024-11-06 10:26:00.158201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.158260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.158274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.158282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.158289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.158302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.168241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.168297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.168310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.168318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.168324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.168338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.178241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.178293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.178307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.178314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.178321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.178334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 
00:33:56.729 [2024-11-06 10:26:00.188320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.188399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.188412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.188419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.188427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.188441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.198230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.198290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.198305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.198313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.198319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.198334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 00:33:56.729 [2024-11-06 10:26:00.208349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.208444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.208459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.208467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.208473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.208488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.729 qpair failed and we were unable to recover it. 
00:33:56.729 [2024-11-06 10:26:00.218357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.729 [2024-11-06 10:26:00.218406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.729 [2024-11-06 10:26:00.218423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.729 [2024-11-06 10:26:00.218431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.729 [2024-11-06 10:26:00.218438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.729 [2024-11-06 10:26:00.218452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.730 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.228390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.228445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.228459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.228467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.228473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.228487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.238405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.238458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.238472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.238479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.238486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.238499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-11-06 10:26:00.248455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.248511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.248524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.248532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.248539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.248552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.258471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.258529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.258543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.258550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.258560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.258574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.268504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.268557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.268570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.268578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.268584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.268598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-11-06 10:26:00.278507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.278568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.278584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.278593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.278600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.278615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.288425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.288480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.288494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.288501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.288508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.288521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.994 [2024-11-06 10:26:00.298446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.298503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.298517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.298525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.298531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.298545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 
00:33:56.994 [2024-11-06 10:26:00.308620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.994 [2024-11-06 10:26:00.308676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.994 [2024-11-06 10:26:00.308690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.994 [2024-11-06 10:26:00.308698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.994 [2024-11-06 10:26:00.308704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.994 [2024-11-06 10:26:00.308718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.994 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.318636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.318698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.318723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.318732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.318740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.318760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.328678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.328732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.328747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.328755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.328762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.328777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-11-06 10:26:00.338547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.338602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.338617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.338624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.338631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.338646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.348715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.348790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.348812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.348821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.348828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.348842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.358765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.358849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.358867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.358875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.358882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.358896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-11-06 10:26:00.368786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.368844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.368858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.368870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.368877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.368891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.378770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.378819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.378833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.378840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.378848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.378865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.388809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.388866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.388881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.388888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.388899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.388913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-11-06 10:26:00.398764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.398825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.398839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.398847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.398853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.398872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.408879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.408937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.408951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.408958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.408965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.408979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.418802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.418856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.418873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.418881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.418888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.418902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-11-06 10:26:00.428921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.429023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.429036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.429045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.429051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.429065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.438957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.439060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.439075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.439082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.439089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.439103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.449007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.449062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.449076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.449083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.449090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.449104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 
00:33:56.995 [2024-11-06 10:26:00.459001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.459055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.459069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.459076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.459083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.459097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.995 qpair failed and we were unable to recover it. 00:33:56.995 [2024-11-06 10:26:00.468936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.995 [2024-11-06 10:26:00.469000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.995 [2024-11-06 10:26:00.469014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.995 [2024-11-06 10:26:00.469021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.995 [2024-11-06 10:26:00.469028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.995 [2024-11-06 10:26:00.469042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.996 qpair failed and we were unable to recover it. 00:33:56.996 [2024-11-06 10:26:00.479102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.996 [2024-11-06 10:26:00.479179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.996 [2024-11-06 10:26:00.479197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.996 [2024-11-06 10:26:00.479204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.996 [2024-11-06 10:26:00.479211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.996 [2024-11-06 10:26:00.479226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.996 qpair failed and we were unable to recover it. 
00:33:56.996 [2024-11-06 10:26:00.489109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.996 [2024-11-06 10:26:00.489170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.996 [2024-11-06 10:26:00.489187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.996 [2024-11-06 10:26:00.489194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.996 [2024-11-06 10:26:00.489200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:56.996 [2024-11-06 10:26:00.489215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:56.996 qpair failed and we were unable to recover it. 00:33:57.258 [2024-11-06 10:26:00.499130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.258 [2024-11-06 10:26:00.499183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.258 [2024-11-06 10:26:00.499197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.258 [2024-11-06 10:26:00.499205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.258 [2024-11-06 10:26:00.499212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.258 [2024-11-06 10:26:00.499226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.258 qpair failed and we were unable to recover it. 00:33:57.258 [2024-11-06 10:26:00.509033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.258 [2024-11-06 10:26:00.509089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.258 [2024-11-06 10:26:00.509103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.258 [2024-11-06 10:26:00.509111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.258 [2024-11-06 10:26:00.509118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.258 [2024-11-06 10:26:00.509132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.258 qpair failed and we were unable to recover it. 
00:33:57.258 [2024-11-06 10:26:00.519066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.258 [2024-11-06 10:26:00.519122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.258 [2024-11-06 10:26:00.519135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.258 [2024-11-06 10:26:00.519143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.258 [2024-11-06 10:26:00.519154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.258 [2024-11-06 10:26:00.519167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.258 qpair failed and we were unable to recover it. 00:33:57.258 [2024-11-06 10:26:00.529247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.258 [2024-11-06 10:26:00.529304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.258 [2024-11-06 10:26:00.529317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.258 [2024-11-06 10:26:00.529325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.258 [2024-11-06 10:26:00.529331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.258 [2024-11-06 10:26:00.529346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.258 qpair failed and we were unable to recover it. 00:33:57.258 [2024-11-06 10:26:00.539234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.258 [2024-11-06 10:26:00.539294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.258 [2024-11-06 10:26:00.539307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.258 [2024-11-06 10:26:00.539315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.539322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.539335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 
00:33:57.259 [2024-11-06 10:26:00.549258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.549318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.549331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.549339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.549346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.549359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.559280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.559336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.559350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.559357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.559364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.559378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.569343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.569404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.569418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.569426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.569432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.569446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 
00:33:57.259 [2024-11-06 10:26:00.579333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.579390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.579403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.579411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.579417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.579431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.589364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.589416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.589430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.589438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.589445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.589458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.599403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.599460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.599474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.599481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.599488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.599502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 
00:33:57.259 [2024-11-06 10:26:00.609299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.609355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.609372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.609379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.609386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.609400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.619441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.619493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.619507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.619514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.619520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.619534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.629481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.629533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.629546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.629554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.629560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.629574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 
00:33:57.259 [2024-11-06 10:26:00.639517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.639589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.639603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.639610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.639617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.639630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.649437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.259 [2024-11-06 10:26:00.649537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.259 [2024-11-06 10:26:00.649551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.259 [2024-11-06 10:26:00.649558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.259 [2024-11-06 10:26:00.649568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.259 [2024-11-06 10:26:00.649582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.259 qpair failed and we were unable to recover it. 00:33:57.259 [2024-11-06 10:26:00.659560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.659619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.659637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.659644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.659651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.659666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 
00:33:57.260 [2024-11-06 10:26:00.669584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.669669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.669695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.669705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.669712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.669732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.679609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.679668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.679684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.679691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.679698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.679713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.689643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.689698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.689712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.689720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.689726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.689740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 
00:33:57.260 [2024-11-06 10:26:00.699690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.699742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.699757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.699765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.699772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.699786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.709700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.709752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.709766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.709773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.709780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.709794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.719738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.719795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.719809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.719816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.719823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.719837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 
00:33:57.260 [2024-11-06 10:26:00.729858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.729943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.729957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.729965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.729972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.729986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.739743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.739793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.739812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.739820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.739826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.739842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 00:33:57.260 [2024-11-06 10:26:00.749847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.260 [2024-11-06 10:26:00.749902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.260 [2024-11-06 10:26:00.749917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.260 [2024-11-06 10:26:00.749925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.260 [2024-11-06 10:26:00.749932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.260 [2024-11-06 10:26:00.749946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.260 qpair failed and we were unable to recover it. 
00:33:57.523 [2024-11-06 10:26:00.759926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.523 [2024-11-06 10:26:00.759988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.523 [2024-11-06 10:26:00.760002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.523 [2024-11-06 10:26:00.760012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.523 [2024-11-06 10:26:00.760020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.523 [2024-11-06 10:26:00.760035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.523 qpair failed and we were unable to recover it. 00:33:57.523 [2024-11-06 10:26:00.769746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.523 [2024-11-06 10:26:00.769849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.523 [2024-11-06 10:26:00.769868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.523 [2024-11-06 10:26:00.769876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.523 [2024-11-06 10:26:00.769883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.523 [2024-11-06 10:26:00.769898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.523 qpair failed and we were unable to recover it. 00:33:57.523 [2024-11-06 10:26:00.779883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.523 [2024-11-06 10:26:00.779976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.523 [2024-11-06 10:26:00.779991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.523 [2024-11-06 10:26:00.779999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.523 [2024-11-06 10:26:00.780010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.523 [2024-11-06 10:26:00.780024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.523 qpair failed and we were unable to recover it. 
00:33:57.523 [2024-11-06 10:26:00.789925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.523 [2024-11-06 10:26:00.789980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.523 [2024-11-06 10:26:00.789994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.523 [2024-11-06 10:26:00.790001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.523 [2024-11-06 10:26:00.790008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.523 [2024-11-06 10:26:00.790022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.523 qpair failed and we were unable to recover it. 00:33:57.523 [2024-11-06 10:26:00.799813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.799885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.799900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.799907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.799914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.799928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.810001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.810060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.810074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.810081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.810088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.810102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 
00:33:57.524 [2024-11-06 10:26:00.819965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.820024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.820037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.820044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.820051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.820065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.830030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.830084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.830098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.830105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.830112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.830126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.840003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.840096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.840109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.840117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.840124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.840139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 
00:33:57.524 [2024-11-06 10:26:00.850134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.850207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.850221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.850229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.850235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.850250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.860101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.860192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.860205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.860213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.860220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.860233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.870017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.870076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.870093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.870101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.870108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.870121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 
00:33:57.524 [2024-11-06 10:26:00.880180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.880234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.880248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.880256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.880263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.880277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.890177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.890236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.890250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.890258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.890264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.890279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.900227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.900277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.900291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.900298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.900304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.900318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 
00:33:57.524 [2024-11-06 10:26:00.910132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.910188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.910202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.910210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.910219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.910233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.920298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.524 [2024-11-06 10:26:00.920352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.524 [2024-11-06 10:26:00.920366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.524 [2024-11-06 10:26:00.920374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.524 [2024-11-06 10:26:00.920380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.524 [2024-11-06 10:26:00.920394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.524 qpair failed and we were unable to recover it. 00:33:57.524 [2024-11-06 10:26:00.930192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.930268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.930282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.930290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.930296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.930310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 
00:33:57.525 [2024-11-06 10:26:00.940336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.940391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.940404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.940412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.940419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.940432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:00.950331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.950382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.950396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.950404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.950410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.950424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:00.960400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.960490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.960503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.960511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.960518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.960532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 
00:33:57.525 [2024-11-06 10:26:00.970436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.970527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.970541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.970549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.970555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.970569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:00.980453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.980504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.980518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.980526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.980532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.980546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:00.990445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:00.990501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:00.990514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:00.990522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:00.990528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:00.990542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 
00:33:57.525 [2024-11-06 10:26:01.000516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:01.000573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:01.000593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:01.000601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:01.000609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:01.000623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:01.010451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:01.010505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:01.010519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:01.010526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:01.010533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:01.010547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 00:33:57.525 [2024-11-06 10:26:01.020535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.525 [2024-11-06 10:26:01.020582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.525 [2024-11-06 10:26:01.020596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.525 [2024-11-06 10:26:01.020603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.525 [2024-11-06 10:26:01.020610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.525 [2024-11-06 10:26:01.020624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.525 qpair failed and we were unable to recover it. 
00:33:57.788 [2024-11-06 10:26:01.030636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-11-06 10:26:01.030715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-11-06 10:26:01.030740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-11-06 10:26:01.030749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-11-06 10:26:01.030757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.788 [2024-11-06 10:26:01.030777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.788 qpair failed and we were unable to recover it. 00:33:57.788 [2024-11-06 10:26:01.040521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-11-06 10:26:01.040616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-11-06 10:26:01.040633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-11-06 10:26:01.040641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-11-06 10:26:01.040653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.788 [2024-11-06 10:26:01.040668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.788 qpair failed and we were unable to recover it. 00:33:57.788 [2024-11-06 10:26:01.050657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-11-06 10:26:01.050750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-11-06 10:26:01.050764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-11-06 10:26:01.050772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-11-06 10:26:01.050779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.788 [2024-11-06 10:26:01.050793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.788 qpair failed and we were unable to recover it. 
00:33:57.788 [2024-11-06 10:26:01.060611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-11-06 10:26:01.060661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-11-06 10:26:01.060674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-11-06 10:26:01.060682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-11-06 10:26:01.060689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.788 [2024-11-06 10:26:01.060703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.788 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.070658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.070707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.070721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.070728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.070735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.070750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.080714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.080772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.080785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.080792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.080799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.080813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-11-06 10:26:01.090764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.090818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.090832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.090840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.090846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.090860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.100732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.100790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.100804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.100811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.100818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.100832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.110756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.110804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.110818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.110826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.110832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.110846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-11-06 10:26:01.120834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.120897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.120911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.120919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.120926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.120940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.130875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.130946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.130963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.130970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.130977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.130992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.140713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.140761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.140775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.140783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.140789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.140803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-11-06 10:26:01.150907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.150959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.150972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.150980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.150987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.151001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.160941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.160993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.161007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.161015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.161021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.161035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.170960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.171014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.171028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.171036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.171045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.171060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-11-06 10:26:01.180957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.181001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.181015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.181022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.181029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.181043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-11-06 10:26:01.191004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-11-06 10:26:01.191055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-11-06 10:26:01.191069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-11-06 10:26:01.191077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-11-06 10:26:01.191083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.789 [2024-11-06 10:26:01.191097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.201057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.201116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.201130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.201138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.201144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.201158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 
00:33:57.790 [2024-11-06 10:26:01.210980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.211083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.211097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.211104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.211110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.211125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.221076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.221171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.221185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.221193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.221199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.221213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.231108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.231160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.231174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.231181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.231188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.231202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 
00:33:57.790 [2024-11-06 10:26:01.241180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.241234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.241247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.241255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.241261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.241275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.251227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.251313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.251327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.251335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.251342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.251356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.261175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.261234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.261251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.261259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.261266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.261280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 
00:33:57.790 [2024-11-06 10:26:01.271204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.271273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.271287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.271294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.271301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.271314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-11-06 10:26:01.281199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-11-06 10:26:01.281252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-11-06 10:26:01.281267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-11-06 10:26:01.281275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-11-06 10:26:01.281281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:57.790 [2024-11-06 10:26:01.281296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.790 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.291197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.291253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.291268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.291275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.291282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.291296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 
00:33:58.053 [2024-11-06 10:26:01.301297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.301383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.301397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.301408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.301415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.301429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.311325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.311375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.311388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.311395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.311402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.311416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.321396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.321454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.321468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.321475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.321481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.321495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 
00:33:58.053 [2024-11-06 10:26:01.331400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.331455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.331469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.331476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.331483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.331496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.341423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.341472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.341486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.341493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.341500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.341513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.351447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.351497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.351511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.351518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.351525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.351538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 
00:33:58.053 [2024-11-06 10:26:01.361473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.361543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.361557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.361564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.361571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.361585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.371570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.371623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.371637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.371645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.371652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.371665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 00:33:58.053 [2024-11-06 10:26:01.381552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.381609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.381634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.053 [2024-11-06 10:26:01.381643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.053 [2024-11-06 10:26:01.381650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.053 [2024-11-06 10:26:01.381669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.053 qpair failed and we were unable to recover it. 
00:33:58.053 [2024-11-06 10:26:01.391490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.053 [2024-11-06 10:26:01.391542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.053 [2024-11-06 10:26:01.391571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.391580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.391588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.391608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.401486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.401544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.401559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.401567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.401574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.401588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.411514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.411570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.411584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.411592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.411598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.411613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 
00:33:58.054 [2024-11-06 10:26:01.421631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.421685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.421710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.421719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.421726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.421746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.431684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.431731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.431746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.431758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.431766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.431782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.441732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.441816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.441831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.441839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.441846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.441860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 
00:33:58.054 [2024-11-06 10:26:01.451764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.451818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.451832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.451839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.451846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.451861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.461743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.461788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.461802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.461809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.461815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.461829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.471761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.471810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.471823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.471831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.471837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.471851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 
00:33:58.054 [2024-11-06 10:26:01.481791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.481888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.481905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.481913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.481920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.481935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.491820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.491878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.491892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.491900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.491906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.491921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 00:33:58.054 [2024-11-06 10:26:01.501851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.501925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.501939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.501946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.501953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.054 [2024-11-06 10:26:01.501967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.054 qpair failed and we were unable to recover it. 
00:33:58.054 [2024-11-06 10:26:01.511876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.054 [2024-11-06 10:26:01.511924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.054 [2024-11-06 10:26:01.511938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.054 [2024-11-06 10:26:01.511947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.054 [2024-11-06 10:26:01.511954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.055 [2024-11-06 10:26:01.511969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.055 qpair failed and we were unable to recover it. 00:33:58.055 [2024-11-06 10:26:01.521846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.055 [2024-11-06 10:26:01.521907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.055 [2024-11-06 10:26:01.521925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.055 [2024-11-06 10:26:01.521932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.055 [2024-11-06 10:26:01.521939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.055 [2024-11-06 10:26:01.521953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.055 qpair failed and we were unable to recover it. 00:33:58.055 [2024-11-06 10:26:01.531975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.055 [2024-11-06 10:26:01.532030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.055 [2024-11-06 10:26:01.532044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.055 [2024-11-06 10:26:01.532051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.055 [2024-11-06 10:26:01.532058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.055 [2024-11-06 10:26:01.532071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.055 qpair failed and we were unable to recover it. 
00:33:58.055 [2024-11-06 10:26:01.541941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.055 [2024-11-06 10:26:01.541993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.055 [2024-11-06 10:26:01.542006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.055 [2024-11-06 10:26:01.542013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.055 [2024-11-06 10:26:01.542020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.055 [2024-11-06 10:26:01.542034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.055 qpair failed and we were unable to recover it. 00:33:58.055 [2024-11-06 10:26:01.551955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.055 [2024-11-06 10:26:01.552005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.055 [2024-11-06 10:26:01.552018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.055 [2024-11-06 10:26:01.552025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.055 [2024-11-06 10:26:01.552032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.055 [2024-11-06 10:26:01.552045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.055 qpair failed and we were unable to recover it. 00:33:58.318 [2024-11-06 10:26:01.561959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.318 [2024-11-06 10:26:01.562027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.318 [2024-11-06 10:26:01.562041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.318 [2024-11-06 10:26:01.562052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.318 [2024-11-06 10:26:01.562059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.562074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 
00:33:58.319 [2024-11-06 10:26:01.572084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.572137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.572150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.572157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.572164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.572177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.582036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.582113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.582127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.582134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.582142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.582155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.592097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.592147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.592161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.592169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.592176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.592189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 
00:33:58.319 [2024-11-06 10:26:01.602157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.602213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.602226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.602234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.602241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.602254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.612203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.612258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.612272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.612279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.612286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.612300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.622162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.622207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.622221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.622228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.622235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.622249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 
00:33:58.319 [2024-11-06 10:26:01.632166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.632217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.632231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.632238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.632245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.632258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.642278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.642334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.642348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.642355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.642361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.642376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.652295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.652354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.652368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.652376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.652382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.652396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 
00:33:58.319 [2024-11-06 10:26:01.662341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.662419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.662433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.662440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.662447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.662460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.672300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.672349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.672363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.672370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.672377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.672390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 00:33:58.319 [2024-11-06 10:26:01.682360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.682417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.682431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.682438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.682445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.319 [2024-11-06 10:26:01.682459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.319 qpair failed and we were unable to recover it. 
00:33:58.319 [2024-11-06 10:26:01.692270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.319 [2024-11-06 10:26:01.692344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.319 [2024-11-06 10:26:01.692358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.319 [2024-11-06 10:26:01.692373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.319 [2024-11-06 10:26:01.692380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.692395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.702349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.702396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.702410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.702417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.702424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.702438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.712409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.712462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.712475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.712483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.712490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.712503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 
00:33:58.320 [2024-11-06 10:26:01.722478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.722534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.722548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.722555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.722562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.722575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.732518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.732587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.732601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.732609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.732615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.732628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.742473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.742528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.742542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.742550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.742556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.742570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 
00:33:58.320 [2024-11-06 10:26:01.752501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.752549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.752563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.752570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.752577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.752591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.762587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.762647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.762661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.762668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.762675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.762689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.772636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.772725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.772750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.772760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.772767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.772786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 
00:33:58.320 [2024-11-06 10:26:01.782517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.782570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.782587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.782595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.782602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.782618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.792497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.792552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.792568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.792575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.792583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.792600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.320 [2024-11-06 10:26:01.802693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.802750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.802765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.802772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.802779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.802793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 
00:33:58.320 [2024-11-06 10:26:01.812741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.320 [2024-11-06 10:26:01.812830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.320 [2024-11-06 10:26:01.812844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.320 [2024-11-06 10:26:01.812852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.320 [2024-11-06 10:26:01.812858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.320 [2024-11-06 10:26:01.812876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.320 qpair failed and we were unable to recover it. 00:33:58.582 [2024-11-06 10:26:01.822728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.582 [2024-11-06 10:26:01.822823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.582 [2024-11-06 10:26:01.822837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.582 [2024-11-06 10:26:01.822849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.582 [2024-11-06 10:26:01.822856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.582 [2024-11-06 10:26:01.822875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.582 qpair failed and we were unable to recover it. 00:33:58.582 [2024-11-06 10:26:01.832737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.582 [2024-11-06 10:26:01.832780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.582 [2024-11-06 10:26:01.832794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.582 [2024-11-06 10:26:01.832802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.582 [2024-11-06 10:26:01.832809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.582 [2024-11-06 10:26:01.832824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.582 qpair failed and we were unable to recover it. 
00:33:58.582 [2024-11-06 10:26:01.842814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.842927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.842941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.842949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.842956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.842970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.852851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.852913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.852927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.852935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.852942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.852956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.862827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.862878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.862892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.862899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.862907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.862921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 
00:33:58.583 [2024-11-06 10:26:01.872891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.872940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.872954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.872962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.872969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.872983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.882923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.882983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.882997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.883005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.883011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.883025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.892957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.893013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.893027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.893034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.893041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.893055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 
00:33:58.583 [2024-11-06 10:26:01.902914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.902962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.902976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.902984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.902990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.903004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.912966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.913026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.913040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.913047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.913053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.913067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.923026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.923082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.923096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.923103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.923110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.923124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 
00:33:58.583 [2024-11-06 10:26:01.933030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.933086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.933100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.933107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.933114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.933128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.943034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.943080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.943094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.943101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.943108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.943122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.952926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.953002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.953016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.953027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.953034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.953048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 
00:33:58.583 [2024-11-06 10:26:01.963144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.583 [2024-11-06 10:26:01.963201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.583 [2024-11-06 10:26:01.963214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.583 [2024-11-06 10:26:01.963222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.583 [2024-11-06 10:26:01.963228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.583 [2024-11-06 10:26:01.963242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.583 qpair failed and we were unable to recover it. 00:33:58.583 [2024-11-06 10:26:01.973104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:01.973156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:01.973169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:01.973177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:01.973183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:01.973196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:01.983134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:01.983181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:01.983195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:01.983202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:01.983208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:01.983222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 
00:33:58.584 [2024-11-06 10:26:01.993105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:01.993205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:01.993218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:01.993226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:01.993232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:01.993250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.003243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.003332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.003345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.003353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.003360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.003374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.013218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.013271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.013285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.013293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.013300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.013314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 
00:33:58.584 [2024-11-06 10:26:02.023131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.023174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.023188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.023195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.023202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.023215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.033288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.033338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.033351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.033359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.033365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.033379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.043331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.043401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.043415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.043424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.043432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.043446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 
00:33:58.584 [2024-11-06 10:26:02.053357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.053407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.053420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.053428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.053435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.053449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.063365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.063422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.063435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.063442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.063449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.063463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 00:33:58.584 [2024-11-06 10:26:02.073315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.584 [2024-11-06 10:26:02.073368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.584 [2024-11-06 10:26:02.073381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.584 [2024-11-06 10:26:02.073389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.584 [2024-11-06 10:26:02.073395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.584 [2024-11-06 10:26:02.073409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.584 qpair failed and we were unable to recover it. 
00:33:58.847 [2024-11-06 10:26:02.083496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.083566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.083580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.083592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.083599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.083613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 00:33:58.847 [2024-11-06 10:26:02.093456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.093509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.093524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.093532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.093539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.093553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 00:33:58.847 [2024-11-06 10:26:02.103463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.103515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.103529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.103537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.103543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.103558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 
00:33:58.847 [2024-11-06 10:26:02.113478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.113529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.113543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.113550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.113557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.113570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 00:33:58.847 [2024-11-06 10:26:02.123446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.123507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.123521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.123529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.123535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.123553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 00:33:58.847 [2024-11-06 10:26:02.133580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.847 [2024-11-06 10:26:02.133683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.847 [2024-11-06 10:26:02.133708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.847 [2024-11-06 10:26:02.133718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.847 [2024-11-06 10:26:02.133726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.847 [2024-11-06 10:26:02.133746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.847 qpair failed and we were unable to recover it. 
00:33:58.848 [2024-11-06 10:26:02.143471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.143522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.143539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.143547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.143553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.143569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.153623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.153673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.153687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.153695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.153701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.153716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.163687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.163761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.163775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.163782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.163788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.163804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 
00:33:58.848 [2024-11-06 10:26:02.173683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.173738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.173752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.173760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.173766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.173781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.183740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.183826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.183841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.183849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.183856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.183874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.193699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.193747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.193762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.193769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.193775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.193790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 
00:33:58.848 [2024-11-06 10:26:02.203799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.203854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.203872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.203880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.203887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.203901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.213794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.213848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.213866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.213879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.213885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.213900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.223816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.223877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.223891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.223898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.223905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.223919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 
00:33:58.848 [2024-11-06 10:26:02.233853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.233951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.233965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.233973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.233980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.233994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.243928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.244004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.244018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.244025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.244032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.244046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 00:33:58.848 [2024-11-06 10:26:02.253914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.253972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.253987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.253995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.254002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.254024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.848 qpair failed and we were unable to recover it. 
00:33:58.848 [2024-11-06 10:26:02.263966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.848 [2024-11-06 10:26:02.264016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.848 [2024-11-06 10:26:02.264031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.848 [2024-11-06 10:26:02.264039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.848 [2024-11-06 10:26:02.264046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.848 [2024-11-06 10:26:02.264060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.273959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.274009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.274023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.274030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.274037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.274051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.284026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.284084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.284098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.284105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.284112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.284126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 
00:33:58.849 [2024-11-06 10:26:02.294012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.294073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.294087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.294095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.294101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.294116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.304011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.304061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.304075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.304083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.304089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.304103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.314073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.314117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.314130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.314138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.314144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.314158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 
00:33:58.849 [2024-11-06 10:26:02.324156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.324209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.324222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.324230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.324236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.324250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.334150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.334249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.334263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.334271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.334277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.334291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 00:33:58.849 [2024-11-06 10:26:02.344116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.849 [2024-11-06 10:26:02.344163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.849 [2024-11-06 10:26:02.344176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.849 [2024-11-06 10:26:02.344188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.849 [2024-11-06 10:26:02.344194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:58.849 [2024-11-06 10:26:02.344208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:58.849 qpair failed and we were unable to recover it. 
00:33:59.112 [2024-11-06 10:26:02.354169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.354216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.354230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.354237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.354244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.354258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 00:33:59.112 [2024-11-06 10:26:02.364215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.364272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.364285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.364292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.364299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.364312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 00:33:59.112 [2024-11-06 10:26:02.374229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.374284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.374297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.374305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.374311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.374325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 
00:33:59.112 [2024-11-06 10:26:02.384247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.384298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.384311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.384318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.384325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.384342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 00:33:59.112 [2024-11-06 10:26:02.394272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.394320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.394334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.394341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.394348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.394362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 00:33:59.112 [2024-11-06 10:26:02.404336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.404396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.404410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.404417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.404424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.404438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 
00:33:59.112 [2024-11-06 10:26:02.414207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.414264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.414277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.414285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.414292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.112 [2024-11-06 10:26:02.414306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.112 qpair failed and we were unable to recover it. 00:33:59.112 [2024-11-06 10:26:02.424340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.112 [2024-11-06 10:26:02.424385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.112 [2024-11-06 10:26:02.424398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.112 [2024-11-06 10:26:02.424405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.112 [2024-11-06 10:26:02.424412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.424426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.434353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.434407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.434420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.434428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.434434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.434448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 
00:33:59.113 [2024-11-06 10:26:02.444425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.444477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.444490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.444498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.444504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.444517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.454420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.454471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.454485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.454492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.454498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.454513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.464433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.464518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.464531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.464538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.464545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.464559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 
00:33:59.113 [2024-11-06 10:26:02.474487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.474535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.474548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.474565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.474571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.474585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.484579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.484638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.484657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.484665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.484672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.484688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.494526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.494584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.494609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.494618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.494625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.494646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 
00:33:59.113 [2024-11-06 10:26:02.504598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.504698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.504724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.504733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.504740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.504761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.514597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.514649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.514665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.514673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.514680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.514699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.524532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.524597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.524612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.524619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.524626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.524640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 
00:33:59.113 [2024-11-06 10:26:02.534648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.534705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.534721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.534729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.534735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.534750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.544683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.544730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.544744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.113 [2024-11-06 10:26:02.544752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.113 [2024-11-06 10:26:02.544758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.113 [2024-11-06 10:26:02.544773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.113 qpair failed and we were unable to recover it. 00:33:59.113 [2024-11-06 10:26:02.554671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.113 [2024-11-06 10:26:02.554720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.113 [2024-11-06 10:26:02.554734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.554741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.554747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.554761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 
00:33:59.114 [2024-11-06 10:26:02.564769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.114 [2024-11-06 10:26:02.564828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.114 [2024-11-06 10:26:02.564842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.564849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.564856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.564873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 00:33:59.114 [2024-11-06 10:26:02.574730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.114 [2024-11-06 10:26:02.574781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.114 [2024-11-06 10:26:02.574795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.574802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.574809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.574823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 00:33:59.114 [2024-11-06 10:26:02.584695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.114 [2024-11-06 10:26:02.584747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.114 [2024-11-06 10:26:02.584761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.584768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.584775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.584789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 
00:33:59.114 [2024-11-06 10:26:02.594796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.114 [2024-11-06 10:26:02.594845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.114 [2024-11-06 10:26:02.594859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.594872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.594879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.594894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 00:33:59.114 [2024-11-06 10:26:02.604875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.114 [2024-11-06 10:26:02.604934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.114 [2024-11-06 10:26:02.604948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.114 [2024-11-06 10:26:02.604959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.114 [2024-11-06 10:26:02.604966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.114 [2024-11-06 10:26:02.604980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.114 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.614867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.614921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.614935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.614943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.614949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.614963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 
00:33:59.377 [2024-11-06 10:26:02.624880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.624927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.624940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.624948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.624954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.624968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.634780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.634826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.634840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.634847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.634853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.634871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.645014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.645069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.645083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.645090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.645097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.645119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 
00:33:59.377 [2024-11-06 10:26:02.654969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.655023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.655036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.655044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.655050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.655064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.665005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.665057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.665071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.665078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.665085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.665099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.674919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.674966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.674980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.674987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.674994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.675008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 
00:33:59.377 [2024-11-06 10:26:02.685055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.685111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.685125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.685132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.685139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.685153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.695098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.695162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.695177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.695184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.695191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.695205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.377 qpair failed and we were unable to recover it. 00:33:59.377 [2024-11-06 10:26:02.705122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.377 [2024-11-06 10:26:02.705166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.377 [2024-11-06 10:26:02.705179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.377 [2024-11-06 10:26:02.705187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.377 [2024-11-06 10:26:02.705193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.377 [2024-11-06 10:26:02.705207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 
00:33:59.378 [2024-11-06 10:26:02.714994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.715042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.715056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.715064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.715070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.715085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.725078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.725134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.725147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.725155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.725161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.725175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.735174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.735255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.735269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.735281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.735288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.735303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 
00:33:59.378 [2024-11-06 10:26:02.745187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.745240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.745253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.745261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.745267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.745281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.755265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.755387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.755401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.755409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.755416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.755429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.765315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.765374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.765388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.765396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.765403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.765417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 
00:33:59.378 [2024-11-06 10:26:02.775289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.775387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.775401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.775408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.775415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.775432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.785290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.785364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.785378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.785385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.785392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.785406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.795313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.795362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.795376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.795383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.795390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.795404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 
00:33:59.378 [2024-11-06 10:26:02.805415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.805477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.805490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.805498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.805504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.805518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.815277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.815330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.815343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.815351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.815358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.815371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.825306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.825367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.825381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.825388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.825395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.825408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 
00:33:59.378 [2024-11-06 10:26:02.835468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.378 [2024-11-06 10:26:02.835517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.378 [2024-11-06 10:26:02.835530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.378 [2024-11-06 10:26:02.835538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.378 [2024-11-06 10:26:02.835544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.378 [2024-11-06 10:26:02.835558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.378 qpair failed and we were unable to recover it. 00:33:59.378 [2024-11-06 10:26:02.845520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.379 [2024-11-06 10:26:02.845576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.379 [2024-11-06 10:26:02.845589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.379 [2024-11-06 10:26:02.845597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.379 [2024-11-06 10:26:02.845603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.379 [2024-11-06 10:26:02.845617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.379 qpair failed and we were unable to recover it. 00:33:59.379 [2024-11-06 10:26:02.855520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.379 [2024-11-06 10:26:02.855574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.379 [2024-11-06 10:26:02.855587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.379 [2024-11-06 10:26:02.855595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.379 [2024-11-06 10:26:02.855601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.379 [2024-11-06 10:26:02.855616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.379 qpair failed and we were unable to recover it. 
00:33:59.379 [2024-11-06 10:26:02.865530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.379 [2024-11-06 10:26:02.865578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.379 [2024-11-06 10:26:02.865593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.379 [2024-11-06 10:26:02.865604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.379 [2024-11-06 10:26:02.865610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.379 [2024-11-06 10:26:02.865624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.379 qpair failed and we were unable to recover it. 00:33:59.379 [2024-11-06 10:26:02.875485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.379 [2024-11-06 10:26:02.875531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.379 [2024-11-06 10:26:02.875546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.379 [2024-11-06 10:26:02.875554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.379 [2024-11-06 10:26:02.875560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.379 [2024-11-06 10:26:02.875575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.379 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.885636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.885692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.885707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.885714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.885721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.885735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 
00:33:59.642 [2024-11-06 10:26:02.895618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.895673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.895688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.895695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.895702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.895716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.905629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.905675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.905690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.905698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.905704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.905722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.915547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.915595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.915609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.915617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.915623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.915638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 
00:33:59.642 [2024-11-06 10:26:02.925737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.925790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.925804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.925812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.925818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.925832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.935721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.935774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.935788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.935796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.935802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.935816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.945761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.945816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.945830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.945838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.945845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.945860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 
00:33:59.642 [2024-11-06 10:26:02.955639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.955691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.955705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.955713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.955719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.955733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.965838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.965900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.965914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.965921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.965928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.965942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 00:33:59.642 [2024-11-06 10:26:02.975849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.975903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.975917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.642 [2024-11-06 10:26:02.975925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.642 [2024-11-06 10:26:02.975931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.642 [2024-11-06 10:26:02.975945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.642 qpair failed and we were unable to recover it. 
00:33:59.642 [2024-11-06 10:26:02.985845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.642 [2024-11-06 10:26:02.985897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.642 [2024-11-06 10:26:02.985911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:02.985919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:02.985925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:02.985939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:02.995846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:02.995899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:02.995913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:02.995924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:02.995931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:02.995944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.005964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.006024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.006038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.006046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.006052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.006066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 
00:33:59.643 [2024-11-06 10:26:03.015865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.015928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.015943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.015951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.015958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.015972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.025945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.026032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.026046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.026054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.026061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.026075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.036000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.036054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.036067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.036074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.036081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.036098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 
00:33:59.643 [2024-11-06 10:26:03.046059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.046113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.046127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.046134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.046140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.046154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.056110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.056172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.056186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.056193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.056200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.056213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.065937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.065992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.066005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.066012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.066019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.066032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 
00:33:59.643 [2024-11-06 10:26:03.076092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.076141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.076154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.076162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.076169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.076182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.086206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.086262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.086275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.086283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.086289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.086302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 00:33:59.643 [2024-11-06 10:26:03.096073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.096119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.096133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.096141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.096147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.643 [2024-11-06 10:26:03.096161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.643 qpair failed and we were unable to recover it. 
00:33:59.643 [2024-11-06 10:26:03.106173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.643 [2024-11-06 10:26:03.106223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.643 [2024-11-06 10:26:03.106237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.643 [2024-11-06 10:26:03.106244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.643 [2024-11-06 10:26:03.106251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.644 [2024-11-06 10:26:03.106265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.644 qpair failed and we were unable to recover it. 00:33:59.644 [2024-11-06 10:26:03.116172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.644 [2024-11-06 10:26:03.116220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.644 [2024-11-06 10:26:03.116234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.644 [2024-11-06 10:26:03.116241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.644 [2024-11-06 10:26:03.116248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.644 [2024-11-06 10:26:03.116261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.644 qpair failed and we were unable to recover it. 00:33:59.644 [2024-11-06 10:26:03.126151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.644 [2024-11-06 10:26:03.126248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.644 [2024-11-06 10:26:03.126265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.644 [2024-11-06 10:26:03.126273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.644 [2024-11-06 10:26:03.126280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.644 [2024-11-06 10:26:03.126294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.644 qpair failed and we were unable to recover it. 
00:33:59.644 [2024-11-06 10:26:03.136277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.644 [2024-11-06 10:26:03.136325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.644 [2024-11-06 10:26:03.136338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.644 [2024-11-06 10:26:03.136346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.644 [2024-11-06 10:26:03.136352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.644 [2024-11-06 10:26:03.136366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.644 qpair failed and we were unable to recover it. 00:33:59.906 [2024-11-06 10:26:03.146292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.906 [2024-11-06 10:26:03.146343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.906 [2024-11-06 10:26:03.146357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.906 [2024-11-06 10:26:03.146364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.906 [2024-11-06 10:26:03.146371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.906 [2024-11-06 10:26:03.146384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.906 qpair failed and we were unable to recover it. 00:33:59.906 [2024-11-06 10:26:03.156273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.906 [2024-11-06 10:26:03.156319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.906 [2024-11-06 10:26:03.156333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.906 [2024-11-06 10:26:03.156341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.906 [2024-11-06 10:26:03.156347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.906 [2024-11-06 10:26:03.156361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.906 qpair failed and we were unable to recover it. 
00:33:59.906 [2024-11-06 10:26:03.166256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.906 [2024-11-06 10:26:03.166313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.906 [2024-11-06 10:26:03.166326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.906 [2024-11-06 10:26:03.166334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.166340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.166357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.176260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.176310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.176325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.176333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.176340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.176357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.186389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.186442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.186457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.186464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.186470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.186485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 
00:33:59.907 [2024-11-06 10:26:03.196423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.196474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.196489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.196496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.196502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.196516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.206359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.206414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.206428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.206436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.206442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.206456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.216482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.216534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.216548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.216556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.216562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.216576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 
00:33:59.907 [2024-11-06 10:26:03.226482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.226530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.226543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.226551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.226558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.226571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.236520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.236567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.236581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.236588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.236595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.236609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.246596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.246651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.246665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.246673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.246679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.246693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 
00:33:59.907 [2024-11-06 10:26:03.256594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.256656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.256673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.256680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.256686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.256700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.266615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.266662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.266675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.266683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.266690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.266703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 00:33:59.907 [2024-11-06 10:26:03.276627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.276674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.276687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.276694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.907 [2024-11-06 10:26:03.276701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.907 [2024-11-06 10:26:03.276714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.907 qpair failed and we were unable to recover it. 
00:33:59.907 [2024-11-06 10:26:03.286697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.907 [2024-11-06 10:26:03.286753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.907 [2024-11-06 10:26:03.286767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.907 [2024-11-06 10:26:03.286775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.286781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.286795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.296683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.296734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.296748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.296755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.296762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.296779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.306689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.306735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.306748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.306756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.306762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.306776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 
00:33:59.908 [2024-11-06 10:26:03.316738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.316786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.316799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.316806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.316813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.316827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.326800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.326856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.326873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.326881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.326887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.326901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.336771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.336824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.336837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.336845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.336851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.336870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 
00:33:59.908 [2024-11-06 10:26:03.346682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.346729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.346743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.346750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.346757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.346770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.356837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.356889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.356903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.356911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.356917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.356931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.366914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.366969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.366982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.366990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.366997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.367011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 
00:33:59.908 [2024-11-06 10:26:03.376900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.376949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.376963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.376970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.376976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.376990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.386788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.386838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.386855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.386865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.386872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.386886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 00:33:59.908 [2024-11-06 10:26:03.396946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.908 [2024-11-06 10:26:03.396998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.908 [2024-11-06 10:26:03.397012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.908 [2024-11-06 10:26:03.397019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.908 [2024-11-06 10:26:03.397026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:33:59.908 [2024-11-06 10:26:03.397040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:59.908 qpair failed and we were unable to recover it. 
00:34:00.171 [2024-11-06 10:26:03.406939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.407029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.407043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.407050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.407057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.407070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 00:34:00.171 [2024-11-06 10:26:03.417082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.417137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.417151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.417159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.417165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.417179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 00:34:00.171 [2024-11-06 10:26:03.427028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.427078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.427091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.427099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.427106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.427127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 
00:34:00.171 [2024-11-06 10:26:03.437047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.437098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.437112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.437119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.437126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.437140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 00:34:00.171 [2024-11-06 10:26:03.447129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.447183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.447197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.447204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.447211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.447225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 00:34:00.171 [2024-11-06 10:26:03.457122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.171 [2024-11-06 10:26:03.457175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.171 [2024-11-06 10:26:03.457188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.171 [2024-11-06 10:26:03.457196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.171 [2024-11-06 10:26:03.457202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.171 [2024-11-06 10:26:03.457216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.171 qpair failed and we were unable to recover it. 
00:34:00.171 [2024-11-06 10:26:03.467100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.467148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.467161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.467168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.467175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.467189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.477178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.477235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.477248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.477256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.477262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.477276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.487171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.487251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.487269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.487277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.487285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.487301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 
00:34:00.172 [2024-11-06 10:26:03.497223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.497279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.497294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.497301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.497308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.497322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.507226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.507284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.507297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.507305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.507311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.507325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.517221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.517268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.517285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.517293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.517299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.517313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 
00:34:00.172 [2024-11-06 10:26:03.527250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.527346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.527360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.527368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.527375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.527389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.537305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.537368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.537383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.537390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.537397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.537411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.547352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.547407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.547421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.547428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.547435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.547448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 
00:34:00.172 [2024-11-06 10:26:03.557236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.557284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.557298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.557306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.557316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.557330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.567411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.567465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.567479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.567486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.567493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.567506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 00:34:00.172 [2024-11-06 10:26:03.577440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.577489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.577503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.577510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.577517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.172 [2024-11-06 10:26:03.577531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.172 qpair failed and we were unable to recover it. 
00:34:00.172 [2024-11-06 10:26:03.587451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.172 [2024-11-06 10:26:03.587499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.172 [2024-11-06 10:26:03.587513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.172 [2024-11-06 10:26:03.587520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.172 [2024-11-06 10:26:03.587527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.587541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.597423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.597467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.597481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.597488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.597494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.597508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.607464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.607520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.607535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.607543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.607549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.607564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 
00:34:00.173 [2024-11-06 10:26:03.617539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.617592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.617606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.617613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.617620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.617634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.627583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.627659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.627673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.627681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.627688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.627702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.637560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.637609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.637623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.637630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.637637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.637650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 
00:34:00.173 [2024-11-06 10:26:03.647658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.647715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.647732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.647739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.647746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.647759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.657643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.657699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.657712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.657720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.657727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.657740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 00:34:00.173 [2024-11-06 10:26:03.667660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.173 [2024-11-06 10:26:03.667705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.173 [2024-11-06 10:26:03.667718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.173 [2024-11-06 10:26:03.667726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.173 [2024-11-06 10:26:03.667732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.173 [2024-11-06 10:26:03.667746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.173 qpair failed and we were unable to recover it. 
00:34:00.435 [2024-11-06 10:26:03.677613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.677665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.677679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.677686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.677693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.677707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.687635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.687690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.687704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.687712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.687722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.687736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.697766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.697817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.697832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.697839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.697845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.697860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 
00:34:00.435 [2024-11-06 10:26:03.707701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.707751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.707766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.707774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.707780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.707795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.717715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.717762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.717776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.717783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.717790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.717804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.727849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.727913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.727927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.727935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.727941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.727956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 
00:34:00.435 [2024-11-06 10:26:03.737898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.737949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.737963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.737971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.737978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2017490 00:34:00.435 [2024-11-06 10:26:03.737992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.747902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.747998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.748063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.748088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.748109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb59c000b90 00:34:00.435 [2024-11-06 10:26:03.748167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.435 qpair failed and we were unable to recover it. 00:34:00.435 [2024-11-06 10:26:03.757909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.435 [2024-11-06 10:26:03.758004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.435 [2024-11-06 10:26:03.758033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.435 [2024-11-06 10:26:03.758050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.435 [2024-11-06 10:26:03.758064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb59c000b90 00:34:00.435 [2024-11-06 10:26:03.758097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.435 qpair failed and we were unable to recover it. 
00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Read completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.435 Write completed with error (sct=0, sc=8) 00:34:00.435 starting I/O failed 00:34:00.436 Read completed with error (sct=0, sc=8) 00:34:00.436 starting I/O failed 00:34:00.436 Write completed with error (sct=0, sc=8) 00:34:00.436 starting I/O failed 00:34:00.436 Write completed with error (sct=0, sc=8) 00:34:00.436 starting I/O failed 00:34:00.436 Read completed with error (sct=0, sc=8) 00:34:00.436 starting I/O failed 00:34:00.436 Write completed with error (sct=0, sc=8) 00:34:00.436 starting I/O failed 00:34:00.436 [2024-11-06 10:26:03.759009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.436 [2024-11-06 10:26:03.768012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.436 [2024-11-06 10:26:03.768121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.436 [2024-11-06 10:26:03.768169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.436 [2024-11-06 10:26:03.768193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: 
*ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.436 [2024-11-06 10:26:03.768214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb590000b90
00:34:00.436 [2024-11-06 10:26:03.768262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:00.436 qpair failed and we were unable to recover it.
00:34:00.436 [2024-11-06 10:26:03.777896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:00.436 [2024-11-06 10:26:03.778000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:00.436 [2024-11-06 10:26:03.778029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:00.436 [2024-11-06 10:26:03.778046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:00.436 [2024-11-06 10:26:03.778060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb590000b90
00:34:00.436 [2024-11-06 10:26:03.778091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:00.436 qpair failed and we were unable to recover it.
00:34:00.436 [2024-11-06 10:26:03.778264] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:34:00.436 A controller has encountered a failure and is being reset.
00:34:00.436 [2024-11-06 10:26:03.778390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014020 (9): Bad file descriptor
00:34:00.436 Controller properly reset.
00:34:00.436 Initializing NVMe Controllers
00:34:00.436 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:00.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:00.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:00.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:00.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:00.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:00.436 Initialization complete. Launching workers.
00:34:00.436 Starting thread on core 1 00:34:00.436 Starting thread on core 2 00:34:00.436 Starting thread on core 3 00:34:00.436 Starting thread on core 0 00:34:00.436 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:00.436 00:34:00.436 real 0m11.381s 00:34:00.436 user 0m21.668s 00:34:00.436 sys 0m3.666s 00:34:00.436 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:00.436 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:00.436 ************************************ 00:34:00.436 END TEST nvmf_target_disconnect_tc2 00:34:00.436 ************************************ 00:34:00.436 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:00.436 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.697 10:26:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.697 rmmod nvme_tcp 00:34:00.697 rmmod nvme_fabrics 00:34:00.697 rmmod nvme_keyring 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4099296 ']' 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4099296 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 4099296 ']' 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 4099296 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4099296 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4099296' 00:34:00.697 killing process with pid 4099296 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 4099296 00:34:00.697 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 4099296 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.958 10:26:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:02.870 00:34:02.870 real 0m22.543s 00:34:02.870 user 0m49.790s 00:34:02.870 sys 0m10.360s 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 ************************************ 00:34:02.870 END TEST nvmf_target_disconnect 00:34:02.870 ************************************ 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:02.870 00:34:02.870 real 6m46.710s 00:34:02.870 user 11m32.617s 00:34:02.870 sys 2m23.356s 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:02.870 10:26:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 ************************************ 00:34:02.870 END TEST nvmf_host 00:34:02.870 ************************************ 00:34:03.130 10:26:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:34:03.130 10:26:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:34:03.130 10:26:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:03.130 10:26:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:03.130 10:26:06 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:03.130 10:26:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.131 ************************************ 00:34:03.131 START TEST nvmf_target_core_interrupt_mode 00:34:03.131 ************************************ 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:03.131 * Looking for test storage... 00:34:03.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.131 --rc genhtml_branch_coverage=1 00:34:03.131 --rc genhtml_function_coverage=1 00:34:03.131 --rc genhtml_legend=1 00:34:03.131 --rc geninfo_all_blocks=1 00:34:03.131 --rc geninfo_unexecuted_blocks=1 00:34:03.131 00:34:03.131 ' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.131 --rc genhtml_branch_coverage=1 00:34:03.131 --rc genhtml_function_coverage=1 00:34:03.131 --rc genhtml_legend=1 00:34:03.131 --rc geninfo_all_blocks=1 00:34:03.131 --rc geninfo_unexecuted_blocks=1 00:34:03.131 00:34:03.131 ' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.131 --rc genhtml_branch_coverage=1 00:34:03.131 --rc genhtml_function_coverage=1 00:34:03.131 --rc genhtml_legend=1 00:34:03.131 --rc geninfo_all_blocks=1 00:34:03.131 --rc geninfo_unexecuted_blocks=1 00:34:03.131 00:34:03.131 ' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:03.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.131 --rc genhtml_branch_coverage=1 00:34:03.131 --rc genhtml_function_coverage=1 00:34:03.131 --rc genhtml_legend=1 00:34:03.131 --rc geninfo_all_blocks=1 00:34:03.131 --rc geninfo_unexecuted_blocks=1 00:34:03.131 00:34:03.131 ' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.131 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:03.392 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:03.393 ************************************ 00:34:03.393 START TEST nvmf_abort 00:34:03.393 ************************************ 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:03.393 * Looking for test storage... 00:34:03.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:03.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.393 --rc genhtml_branch_coverage=1 00:34:03.393 --rc genhtml_function_coverage=1 00:34:03.393 --rc genhtml_legend=1 00:34:03.393 --rc geninfo_all_blocks=1 00:34:03.393 --rc geninfo_unexecuted_blocks=1 00:34:03.393 00:34:03.393 ' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:03.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.393 --rc genhtml_branch_coverage=1 00:34:03.393 --rc genhtml_function_coverage=1 00:34:03.393 --rc genhtml_legend=1 00:34:03.393 --rc geninfo_all_blocks=1 00:34:03.393 --rc geninfo_unexecuted_blocks=1 00:34:03.393 00:34:03.393 ' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:03.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.393 --rc genhtml_branch_coverage=1 00:34:03.393 --rc genhtml_function_coverage=1 00:34:03.393 --rc genhtml_legend=1 00:34:03.393 --rc geninfo_all_blocks=1 00:34:03.393 --rc geninfo_unexecuted_blocks=1 00:34:03.393 00:34:03.393 ' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:03.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.393 --rc genhtml_branch_coverage=1 00:34:03.393 --rc genhtml_function_coverage=1 00:34:03.393 --rc genhtml_legend=1 00:34:03.393 --rc geninfo_all_blocks=1 00:34:03.393 --rc geninfo_unexecuted_blocks=1 00:34:03.393 00:34:03.393 ' 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.393 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.394 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.394 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.394 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.394 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.394 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.654 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.655 10:26:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.655 10:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.885 10:26:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:11.885 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
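The block above is nvmf/common.sh building its lookup tables of supported NICs by PCI vendor:device ID (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, plus a list of Mellanox 0x15b3 ConnectX parts) and then walking the detected devices; this host's E810 ports (0000:31:00.0 and 0000:31:00.1, device 0x159b, driver ice) match the e810 list. A minimal sketch of the same classification done by hand, assuming lspci is available; this is an illustration only, not what common.sh itself runs, since the script consults a prebuilt pci_bus_cache instead:

  # find E810 ports by vendor:device ID, the same IDs matched in the trace above
  lspci -Dnn | grep -E '\[8086:(1592|159b)\]'
  # map a PCI address to its kernel net device the way the script does, via sysfs
  ls /sys/bus/pci/devices/0000:31:00.0/net/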
00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:11.885 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:11.885 Found net devices under 0000:31:00.0: cvl_0_0 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:11.885 Found net devices under 0000:31:00.1: cvl_0_1 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.885 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.886 10:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:34:11.886 00:34:11.886 --- 10.0.0.2 ping statistics --- 00:34:11.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.886 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:34:11.886 00:34:11.886 --- 10.0.0.1 ping statistics --- 00:34:11.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.886 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4105355 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4105355 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 4105355 ']' 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:11.886 10:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:11.886 [2024-11-06 10:26:15.348415] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:11.886 [2024-11-06 10:26:15.349445] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:11.886 [2024-11-06 10:26:15.349486] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.147 [2024-11-06 10:26:15.455331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:12.147 [2024-11-06 10:26:15.506915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.147 [2024-11-06 10:26:15.506964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.147 [2024-11-06 10:26:15.506974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.147 [2024-11-06 10:26:15.506981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.147 [2024-11-06 10:26:15.506988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.147 [2024-11-06 10:26:15.508635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:12.147 [2024-11-06 10:26:15.508801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.147 [2024-11-06 10:26:15.508802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.147 [2024-11-06 10:26:15.584123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:12.147 [2024-11-06 10:26:15.584189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.147 [2024-11-06 10:26:15.584827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
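At this point nvmf_tcp_init has moved the target-side E810 port (cvl_0_0) into a dedicated network namespace, left the initiator-side port (cvl_0_1) in the root namespace, assigned 10.0.0.2/24 and 10.0.0.1/24 respectively, opened TCP port 4420 with an iptables rule tagged SPDK_NVMF so cleanup can find it later, verified connectivity with two pings, and nvmfappstart has launched nvmf_tgt inside the namespace. Condensed into a sketch built from the exact commands traced above (interface names and paths are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # launch the target on cores 1-3 (mask 0xE) in interrupt mode, all tracepoint groups enabled
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &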
00:34:12.148 [2024-11-06 10:26:15.585138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.719 [2024-11-06 10:26:16.201672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.719 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 Malloc0 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 Delay0 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
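With the target up, the rpc_cmd calls above provision it over the JSON-RPC socket: a TCP transport, a 64 MB Malloc0 bdev with a 4096-byte block size (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from abort.sh), a Delay0 delay bdev stacked on Malloc0 so that I/O stays outstanding long enough to be aborted, and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 as its namespace; the next entries below add TCP listeners for the subsystem and the discovery service on 10.0.0.2:4420. rpc_cmd is the autotest helper that forwards its arguments to the running RPC server, so the same provisioning could plausibly be issued directly with scripts/rpc.py (a sketch, reusing the argument strings exactly as traced):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420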
00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 [2024-11-06 10:26:16.285535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.980 10:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:34:12.980 [2024-11-06 10:26:16.367570] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:15.525 Initializing NVMe Controllers 00:34:15.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:15.525 controller IO queue size 128 less than required 00:34:15.525 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:34:15.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:34:15.525 Initialization complete. Launching workers. 
00:34:15.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28839 00:34:15.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28896, failed to submit 66 00:34:15.525 success 28839, unsuccessful 57, failed 0 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.525 rmmod nvme_tcp 00:34:15.525 rmmod nvme_fabrics 00:34:15.525 rmmod nvme_keyring 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4105355 ']' 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4105355 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 4105355 ']' 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 4105355 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4105355 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:15.525 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4105355' 00:34:15.526 killing process with pid 4105355 
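The abort example's summary near the top of this block is internally consistent: 123 I/Os completed normally and 28839 failed (the expected outcome for an I/O whose abort succeeds), 28962 in total; 66 aborts could not be submitted, and of the 28896 that were submitted, 28839 succeeded and 57 did not. One plausible reading, given the large artificial latencies injected by Delay0, is that nearly every I/O was still outstanding when its abort arrived, which is what the test is designed to exercise. A quick check of the arithmetic:

  echo $((123 + 28839))   # 28962 total I/Os
  echo $((28839 + 57))    # 28896 aborts actually submitted
  echo $((28896 + 66))    # 28962 abort attempts, matching the I/O total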
00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 4105355 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 4105355 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.526 10:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.439 00:34:17.439 real 0m14.108s 00:34:17.439 user 0m10.759s 00:34:17.439 sys 0m7.526s 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:17.439 ************************************ 00:34:17.439 END TEST nvmf_abort 00:34:17.439 ************************************ 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:17.439 ************************************ 00:34:17.439 START TEST nvmf_ns_hotplug_stress 00:34:17.439 ************************************ 00:34:17.439 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:17.700 * Looking for test storage... 
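For reference, the nvmftestfini sequence at the end of the abort test above undoes the earlier setup: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the SPDK_NVMF-tagged iptables rule is dropped by round-tripping iptables-save through grep -v, the initiator-side address is flushed, and the namespace is removed. Condensed into a sketch (the final line is an assumption: _remove_spdk_ns is traced above but its commands are redirected away, so the exact namespace-delete command is not visible in this log):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk   # assumed: performed inside _remove_spdk_ns, not shown in the trace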
00:34:17.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.700 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:17.700 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:17.700 10:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:17.700 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:17.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.701 --rc genhtml_branch_coverage=1 00:34:17.701 --rc genhtml_function_coverage=1 00:34:17.701 --rc genhtml_legend=1 00:34:17.701 --rc geninfo_all_blocks=1 00:34:17.701 --rc geninfo_unexecuted_blocks=1 00:34:17.701 00:34:17.701 ' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:17.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.701 --rc genhtml_branch_coverage=1 00:34:17.701 --rc genhtml_function_coverage=1 00:34:17.701 --rc genhtml_legend=1 00:34:17.701 --rc geninfo_all_blocks=1 00:34:17.701 --rc geninfo_unexecuted_blocks=1 00:34:17.701 00:34:17.701 ' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:17.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.701 --rc genhtml_branch_coverage=1 00:34:17.701 --rc genhtml_function_coverage=1 00:34:17.701 --rc genhtml_legend=1 00:34:17.701 --rc geninfo_all_blocks=1 00:34:17.701 --rc geninfo_unexecuted_blocks=1 00:34:17.701 00:34:17.701 ' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:17.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.701 --rc genhtml_branch_coverage=1 00:34:17.701 --rc genhtml_function_coverage=1 
00:34:17.701 --rc genhtml_legend=1 00:34:17.701 --rc geninfo_all_blocks=1 00:34:17.701 --rc geninfo_unexecuted_blocks=1 00:34:17.701 00:34:17.701 ' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.701 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.702 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.702 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.702 10:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.847 10:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.847 10:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:25.847 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:25.847 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.847 
10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:25.847 Found net devices under 0000:31:00.0: cvl_0_0 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:25.847 Found net devices under 0000:31:00.1: cvl_0_1 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.847 10:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.847 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.848 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.848 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.848 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:34:26.109 00:34:26.109 --- 10.0.0.2 ping statistics --- 00:34:26.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.109 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:26.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:26.109 00:34:26.109 --- 10.0.0.1 ping statistics --- 00:34:26.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.109 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4110658 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4110658 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 4110658 ']' 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
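By this point nvmf_tcp_init has built the two-port E810 test topology: the first port (cvl_0_0) has been moved into a private network namespace and serves as the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1, with an iptables ACCEPT rule for the NVMe/TCP port 4420 and a ping in each direction to confirm reachability. A condensed sketch of the equivalent manual setup, using the interface names, addresses, and namespace name shown in the trace above:

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP listener port through the initiator-side firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1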
00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:26.109 10:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:26.370 [2024-11-06 10:26:29.658874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:26.370 [2024-11-06 10:26:29.660068] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:26.370 [2024-11-06 10:26:29.660121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.370 [2024-11-06 10:26:29.766160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:26.370 [2024-11-06 10:26:29.816849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.370 [2024-11-06 10:26:29.816911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.370 [2024-11-06 10:26:29.816920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.370 [2024-11-06 10:26:29.816928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.370 [2024-11-06 10:26:29.816934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.370 [2024-11-06 10:26:29.818765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.370 [2024-11-06 10:26:29.818935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:26.370 [2024-11-06 10:26:29.818962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.630 [2024-11-06 10:26:29.894522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:26.630 [2024-11-06 10:26:29.894590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:26.630 [2024-11-06 10:26:29.895201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:26.631 [2024-11-06 10:26:29.895499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
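The target itself is launched inside that namespace with core mask 0xE (three reactors), the full 0xFFFF tracepoint group mask, and --interrupt-mode, which is why the startup notices above report each nvmf_tgt poll group being switched to interrupt mode. One illustrative way to reproduce the start-and-wait step by hand; this is only a sketch, not the harness's nvmfappstart/waitforlisten helpers, and the polling loop is an assumption:

# start the target in the namespace created above and remember its pid
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# wait until the RPC socket (/var/tmp/spdk.sock) answers before configuring
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done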
00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:27.202 [2024-11-06 10:26:30.667957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.202 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:27.462 10:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:27.723 [2024-11-06 10:26:31.016714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.723 10:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:27.723 10:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:34:27.984 Malloc0 00:34:27.984 10:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:28.245 Delay0 00:34:28.245 10:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:28.505 10:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:34:28.505 NULL1 00:34:28.765 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
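With the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (max 10 namespaces), data and discovery listeners on 10.0.0.2:4420, and the Malloc0 -> Delay0 and NULL1 bdevs attached, the stress phase that follows runs a spdk_nvme_perf randread workload (-t 30) against the target and, for as long as that perf process is alive, keeps detaching and re-attaching namespace 1 while stepping NULL1's size up by one each pass (null_size 1001, 1002, ...). A sketch of that pattern as it appears in the trace below; paths are shortened and the loop form is inferred from the repeated kill -0 checks:

rpc=./scripts/rpc.py

# background I/O load against the exported subsystem
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID"; do                                     # run until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                      # resize under load
done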
00:34:28.765 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:28.765 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4111141 00:34:28.765 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:28.765 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:29.025 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.286 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:29.286 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:29.286 true 00:34:29.286 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:29.286 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:29.546 10:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.805 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:29.805 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:29.805 true 00:34:29.805 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:29.805 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.066 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.327 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:30.327 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:30.327 true 00:34:30.587 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:30.587 10:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.587 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.849 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:30.849 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:31.109 true 00:34:31.109 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:31.109 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:31.109 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:31.369 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:31.370 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:31.631 true 00:34:31.631 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:31.631 10:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:31.892 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:31.892 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:31.892 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:32.153 true 00:34:32.153 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:32.153 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.414 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:34:32.675 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:32.675 10:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:32.675 true 00:34:32.675 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:32.675 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.936 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.215 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:33.215 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:33.215 true 00:34:33.215 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:33.215 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.476 10:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.737 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:33.737 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:33.737 true 00:34:33.737 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:33.737 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.998 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.258 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:34.258 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:34.258 true 00:34:34.519 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 4111141 00:34:34.519 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:34.519 10:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.780 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:34.780 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:34.780 true 00:34:35.040 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:35.040 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.040 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.302 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:35.302 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:35.562 true 00:34:35.562 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:35.562 10:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.562 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.823 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:35.823 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:36.085 true 00:34:36.085 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:36.085 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.085 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:36.344 10:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:36.344 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:36.605 true 00:34:36.605 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:36.605 10:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.605 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:36.866 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:36.866 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:37.126 true 00:34:37.126 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:37.126 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.387 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.387 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:37.387 10:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:37.648 true 00:34:37.648 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:37.648 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.910 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.910 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:37.910 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:38.170 true 00:34:38.170 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:38.170 10:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.431 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.431 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:38.431 10:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:38.693 true 00:34:38.693 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:38.693 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.954 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.954 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:38.954 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:39.214 true 00:34:39.214 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:39.214 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.475 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.735 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:39.735 10:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:39.735 true 00:34:39.735 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:39.736 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.996 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:40.257 10:26:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:40.257 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:40.257 true 00:34:40.257 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:40.257 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.521 10:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:40.521 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:40.521 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:40.781 true 00:34:40.781 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:40.781 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.043 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:41.043 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:41.043 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:41.304 true 00:34:41.304 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:41.304 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.564 10:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:41.564 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:41.564 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:41.826 true 00:34:41.826 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:41.826 10:26:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.088 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:42.088 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:42.088 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:42.348 true 00:34:42.348 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:42.348 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.608 10:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:42.868 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:42.869 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:34:42.869 true 00:34:42.869 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:42.869 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.129 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:43.389 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:43.389 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:43.389 true 00:34:43.389 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:43.389 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.650 10:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:43.650 10:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:43.651 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:43.911 true 00:34:43.911 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:43.911 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.171 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.432 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:44.432 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:44.432 true 00:34:44.432 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:44.432 10:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.693 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:44.955 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:34:44.955 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:34:44.955 true 00:34:44.955 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:44.955 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.215 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:45.476 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:34:45.476 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:34:45.476 true 00:34:45.476 10:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:45.476 10:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.738 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:45.738 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:34:45.738 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:34:45.998 true 00:34:45.999 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:45.999 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:46.260 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:46.521 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:34:46.521 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:34:46.521 true 00:34:46.521 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:46.521 10:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:46.781 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:47.043 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:34:47.043 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:34:47.043 true 00:34:47.043 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:47.043 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.304 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:47.566 10:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:34:47.566 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:34:47.566 true 00:34:47.566 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:47.566 10:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:47.827 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:47.827 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:34:47.827 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:34:48.088 true 00:34:48.088 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:48.088 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.349 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:48.349 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:34:48.349 10:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:34:48.610 true 00:34:48.610 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:48.610 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:48.870 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:48.870 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:34:49.131 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:34:49.131 true 00:34:49.131 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:49.131 10:26:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.391 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:49.653 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:34:49.653 10:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:34:49.653 true 00:34:49.653 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:49.653 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:49.913 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:50.175 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:34:50.175 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:34:50.175 true 00:34:50.175 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:50.175 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.437 10:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:50.699 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:34:50.699 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:34:50.699 true 00:34:50.699 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:50.699 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.959 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:51.220 10:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:34:51.220 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:34:51.481 true 00:34:51.481 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:51.481 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:51.481 10:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:51.741 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:34:51.741 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:34:52.001 true 00:34:52.001 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:52.001 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:52.001 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:52.262 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:34:52.262 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:34:52.524 true 00:34:52.524 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:52.524 10:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:52.524 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:52.785 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:34:52.785 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:34:53.046 true 00:34:53.046 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:53.046 10:26:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:53.307 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:53.307 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:34:53.307 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:34:53.567 true 00:34:53.567 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:53.567 10:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:53.828 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:53.828 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:34:53.828 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:34:54.089 true 00:34:54.089 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:54.089 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:54.350 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:54.612 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:34:54.612 10:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:34:54.612 true 00:34:54.612 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:54.612 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:54.873 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:55.134 10:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:34:55.134 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:34:55.134 true 00:34:55.134 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:55.134 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:55.394 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:55.655 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:34:55.655 10:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:34:55.655 true 00:34:55.655 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:55.655 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:55.915 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:56.175 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:34:56.175 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:34:56.175 true 00:34:56.175 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:56.175 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:56.436 10:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:56.696 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:34:56.696 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:34:56.957 true 00:34:56.957 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:56.957 10:27:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:56.957 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:57.218 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:34:57.218 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:34:57.478 true 00:34:57.478 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:57.478 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:57.478 10:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:57.738 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:34:57.738 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:34:57.999 true 00:34:57.999 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:57.999 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:58.259 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:58.259 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:34:58.259 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:34:58.520 true 00:34:58.520 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141 00:34:58.520 10:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:58.780 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:58.780 10:27:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:34:58.780 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:34:59.041 true
00:34:59.041 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141
00:34:59.041 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:59.041 Initializing NVMe Controllers
00:34:59.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:59.041 Controller IO queue size 128, less than required.
00:34:59.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:59.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:34:59.041 Initialization complete. Launching workers.
00:34:59.041 ========================================================
00:34:59.041 Latency(us)
00:34:59.041 Device Information : IOPS MiB/s Average min max
00:34:59.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29766.62 14.53 4300.05 1494.20 11122.08
00:34:59.041 ========================================================
00:34:59.041 Total : 29766.62 14.53 4300.05 1494.20 11122.08
00:34:59.041
00:34:59.337 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:34:59.337 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:34:59.337 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:34:59.649 true
00:34:59.649 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4111141
00:34:59.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4111141) - No such process
00:34:59.649 10:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4111141
00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:59.933 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:35:00.193 null0 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:35:00.193 null1 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.193 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:35:00.454 null2 00:35:00.454 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.454 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.454 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:35:00.714 null3 00:35:00.714 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.714 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.714 10:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:35:00.714 null4 00:35:00.714 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.714 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.714 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:35:00.974 null5 00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:35:00.975 null6 
00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:00.975 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:35:01.235 null7 00:35:01.235 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:01.235 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:01.235 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:35:01.235 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4117432 4117434 4117436 4117438 4117440 4117442 4117444 4117446 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.236 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.497 10:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:01.759 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.021 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.283 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:02.545 10:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:02.545 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.545 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:35:02.545 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.545 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:02.545 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.807 10:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:02.807 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:03.069 
10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.069 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:03.331 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.331 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.331 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:03.331 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:03.332 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:03.594 10:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:03.594 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:03.856 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.117 10:27:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:04.117 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 
10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.379 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:04.641 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:35:04.641 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:04.641 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:04.641 10:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.641 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:04.902 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:05.163 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.164 rmmod nvme_tcp 00:35:05.164 rmmod nvme_fabrics 00:35:05.164 rmmod nvme_keyring 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4110658 ']' 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4110658 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 4110658 ']' 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 4110658 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:05.164 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4110658 00:35:05.425 10:27:08 
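Up to this point the trace is the body of target/ns_hotplug_stress.sh lines 16-18: ten passes over nqn.2016-06.io.spdk:cnode1, each attaching one of the null0..null7 bdevs as a namespace through rpc.py and hot-removing namespaces again while the host side keeps issuing I/O. A minimal sketch of that loop, reconstructed only from the trace above (the real script adds randomization and interleaving that this linear form does not show):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do                                   # ns_hotplug_stress.sh@16
        nsid=$(( RANDOM % 8 + 1 ))                           # assumed: pick nsid 1..8 at random
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))"   # @17: null<nsid-1> becomes namespace <nsid>
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"       # @18: hot-remove it again
        (( ++i ))
    done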
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4110658' 00:35:05.425 killing process with pid 4110658 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 4110658 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 4110658 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.425 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.426 10:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.972 00:35:07.972 real 0m50.036s 00:35:07.972 user 3m5.109s 00:35:07.972 sys 0m22.241s 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:07.972 ************************************ 00:35:07.972 END TEST nvmf_ns_hotplug_stress 00:35:07.972 ************************************ 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:07.972 
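After the loop, the trace is nvmftestfini from test/nvmf/common.sh: unload the kernel NVMe/TCP initiator modules, kill the interrupt-mode target process, restore iptables without the SPDK_NVMF rules, and tear down the test network namespace. Condensed into the commands actually visible above (the pid and interface names are the ones from this run; the namespace-delete step is an assumption about what _remove_spdk_ns does):

    modprobe -v -r nvme-tcp          # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are its output
    modprobe -v -r nvme-fabrics
    kill 4110658                     # killprocess: stop the nvmf_tgt started for this test (reactor_1), then wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules the test inserted
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1         # release the initiator-side test address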
10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.972 ************************************ 00:35:07.972 START TEST nvmf_delete_subsystem 00:35:07.972 ************************************ 00:35:07.972 10:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:07.972 * Looking for test storage... 00:35:07.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.972 --rc genhtml_branch_coverage=1 00:35:07.972 --rc genhtml_function_coverage=1 00:35:07.972 --rc genhtml_legend=1 00:35:07.972 --rc geninfo_all_blocks=1 00:35:07.972 --rc geninfo_unexecuted_blocks=1 00:35:07.972 00:35:07.972 ' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.972 --rc genhtml_branch_coverage=1 00:35:07.972 --rc genhtml_function_coverage=1 00:35:07.972 --rc genhtml_legend=1 00:35:07.972 --rc geninfo_all_blocks=1 00:35:07.972 --rc geninfo_unexecuted_blocks=1 00:35:07.972 00:35:07.972 ' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.972 --rc genhtml_branch_coverage=1 00:35:07.972 --rc genhtml_function_coverage=1 00:35:07.972 --rc genhtml_legend=1 00:35:07.972 --rc geninfo_all_blocks=1 00:35:07.972 --rc geninfo_unexecuted_blocks=1 00:35:07.972 00:35:07.972 ' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.972 --rc genhtml_branch_coverage=1 00:35:07.972 --rc genhtml_function_coverage=1 00:35:07.972 --rc 
genhtml_legend=1 00:35:07.972 --rc geninfo_all_blocks=1 00:35:07.972 --rc geninfo_unexecuted_blocks=1 00:35:07.972 00:35:07.972 ' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.972 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.973 10:27:11 
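The nvmf_delete_subsystem preamble above is autotest_common.sh probing the installed lcov: the --rc lcov_* switches are only passed to lcov releases older than 2. The gate, written out with the lt()/cmp_versions helpers from scripts/common.sh that the trace is stepping through (structure paraphrased from the trace, not copied from the script):

    lcov_version=$(lcov --version | awk '{print $NF}')   # "1.15" on this runner
    lcov_rc_opt=''
    if lt "$lcov_version" 2; then                         # lt() -> cmp_versions "$1" "<" "$2"
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    export LCOV_OPTS="
            $lcov_rc_opt
            --rc genhtml_branch_coverage=1
            --rc genhtml_function_coverage=1
            --rc genhtml_legend=1
            --rc geninfo_all_blocks=1
            --rc geninfo_unexecuted_blocks=1
    "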
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.973 10:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.114 10:27:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.114 10:27:18 
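The trace above is nvmf/common.sh building its table of supported NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX devices) and, because the job's NIC type resolves to e810 ([[ e810 == e810 ]] above), keeping only the E810 entries as candidate test ports. A minimal stand-alone sketch of the same lookup, assuming lspci is available (this is not the helper the script itself uses, which reads a prebuilt pci_bus_cache):

  # Sketch: list PCI addresses of Intel E810 functions (vendor 0x8086, devices 0x1592 / 0x159b)
  e810_ids=(1592 159b)
  pci_devs=()
  for id in "${e810_ids[@]}"; do
    while read -r addr _; do
      pci_devs+=("$addr")
    done < <(lspci -Dnd "8086:${id}")
  done
  echo "candidate NVMe-oF test ports: ${pci_devs[*]}"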
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.114 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:16.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:16.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.115 10:27:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:16.115 Found net devices under 0000:31:00.0: cvl_0_0 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:16.115 Found net devices under 0000:31:00.1: cvl_0_1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.115 10:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:35:16.115 00:35:16.115 --- 10.0.0.2 ping statistics --- 00:35:16.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.115 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:16.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:35:16.115 00:35:16.115 --- 10.0.0.1 ping statistics --- 00:35:16.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.115 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4123588 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4123588 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 4123588 ']' 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
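At this point nvmf_tcp_init has mapped each PCI function to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net/ (yielding cvl_0_0 and cvl_0_1) and split the two ports into a minimal two-node topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), an iptables rule accepts TCP port 4420 on cvl_0_1, and a ping in each direction confirms reachability. Condensed from the trace above, the plumbing amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on cvl_0_1
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator

The nvmf_tgt application itself is then launched through NVMF_TARGET_NS_CMD, i.e. prefixed with 'ip netns exec cvl_0_0_ns_spdk', so its TCP listener binds inside the namespace.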
00:35:16.115 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:16.116 10:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.116 [2024-11-06 10:27:19.235655] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:16.116 [2024-11-06 10:27:19.236636] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:16.116 [2024-11-06 10:27:19.236674] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.116 [2024-11-06 10:27:19.321520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:16.116 [2024-11-06 10:27:19.356729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.116 [2024-11-06 10:27:19.356766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.116 [2024-11-06 10:27:19.356774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.116 [2024-11-06 10:27:19.356781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.116 [2024-11-06 10:27:19.356787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.116 [2024-11-06 10:27:19.357993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.116 [2024-11-06 10:27:19.357995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.116 [2024-11-06 10:27:19.412611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:16.116 [2024-11-06 10:27:19.413079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:16.116 [2024-11-06 10:27:19.413433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
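nvmfappstart -m 0x3 starts nvmf_tgt inside the target namespace with a two-core mask (cores 0 and 1) and --interrupt-mode, which is why the notices above report both reactors starting and the app/poll-group threads being set to interrupt mode. One way to inspect the reactor state of a running target is the framework_get_reactors RPC; a sketch, assuming the default /var/tmp/spdk.sock socket and the rpc.py shipped in the checked-out tree:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors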
00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 [2024-11-06 10:27:20.086558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 [2024-11-06 10:27:20.114848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 NULL1 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 Delay0 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4123739 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:16.686 10:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:16.947 [2024-11-06 10:27:20.212852] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
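The RPC sequence recorded above provisions the target specifically for a delete-while-busy test: a TCP transport with an 8192-byte IO unit size, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MB null bdev, and a delay bdev (Delay0) layered on top of it with 1,000,000 µs (about 1 s) average and p99 read/write latencies so that plenty of I/O is still outstanding when the subsystem is torn down. spdk_nvme_perf then starts a 5-second 70/30 random read/write workload at queue depth 128 on cores 2-3 (-c 0xC) against that namespace. The same provisioning expressed as plain rpc.py calls (a sketch; the script drives the same RPCs through its rpc_cmd wrapper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                  # 1000 MB backing bdev, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0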
00:35:18.862 10:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.862 10:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.862 10:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 starting I/O failed: -6 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 [2024-11-06 10:27:22.292038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169c2c0 is same with the state(6) to be set 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read 
completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Write completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.862 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read 
completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 starting I/O failed: -6 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 [2024-11-06 10:27:22.296538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac08000c40 is same with the state(6) to be set 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, 
sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Write completed with error (sct=0, sc=8) 00:35:18.863 Read completed with error (sct=0, sc=8) 00:35:19.806 [2024-11-06 10:27:23.271189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169d5e0 is same with the state(6) to be set 00:35:19.806 Read completed with error (sct=0, sc=8) 00:35:19.806 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 [2024-11-06 10:27:23.295787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169c0e0 is same with the state(6) to be set 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 
00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 [2024-11-06 10:27:23.295876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169c4a0 is same with the state(6) to be set 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 [2024-11-06 10:27:23.298763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac0800d7e0 is same with the state(6) to be set 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read 
completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 Read completed with error (sct=0, sc=8) 00:35:19.807 Write completed with error (sct=0, sc=8) 00:35:19.807 [2024-11-06 10:27:23.298853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac0800d020 is same with the state(6) to be set 00:35:19.807 Initializing NVMe Controllers 00:35:19.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:19.807 Controller IO queue size 128, less than required. 00:35:19.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:19.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:19.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:19.807 Initialization complete. Launching workers. 00:35:19.807 ======================================================== 00:35:19.807 Latency(us) 00:35:19.807 Device Information : IOPS MiB/s Average min max 00:35:19.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.31 0.08 901531.33 213.58 1006876.60 00:35:19.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.81 0.08 921352.23 323.35 2001233.40 00:35:19.807 ======================================================== 00:35:19.807 Total : 334.12 0.16 911486.09 213.58 2001233.40 00:35:19.807 00:35:19.807 [2024-11-06 10:27:23.299363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169d5e0 (9): Bad file descriptor 00:35:19.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:35:19.807 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.807 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:35:19.807 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4123739 00:35:19.807 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4123739 00:35:20.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4123739) - No such process 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4123739 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4123739 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # type -t wait 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 4123739 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:20.378 [2024-11-06 10:27:23.834918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4124416 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:20.378 10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:20.378 
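The long runs of 'completed with error (sct=0, sc=8)' above are the expected result of deleting the subsystem while spdk_nvme_perf still has commands queued: status code type 0 with status code 0x8 corresponds to the NVMe generic 'Command Aborted due to SQ Deletion' status returned as the target tears its queue pairs down, and perf duly reports 'errors occurred' once the in-flight I/O has drained. The script then re-creates the subsystem, re-attaches Delay0, launches a second, 3-second perf run in the background (perf_pid=4124416) and waits for it by probing the PID every half second, which is what the repeated sleep 0.5 iterations below are. Reduced to a sketch, the waiting pattern is roughly:

  perf_pid=4124416                    # PID of the backgrounded spdk_nvme_perf
  delay=0
  while kill -0 "$perf_pid"; do       # succeeds while perf is still running
    (( delay++ > 20 )) && exit 1      # give up if it has not finished after ~10 s
    sleep 0.5
  done                                # the final probe prints 'No such process', as seen later in the trace
  wait "$perf_pid"                    # reap the exit status once perf is gone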
10:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:20.640 [2024-11-06 10:27:23.914904] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:35:20.901 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:20.901 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:20.901 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:21.474 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:21.474 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:21.474 10:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:22.046 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:22.046 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:22.046 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:22.618 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:22.618 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:22.618 10:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:22.879 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:22.879 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:22.879 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:23.450 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:23.450 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:23.450 10:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:23.711 Initializing NVMe Controllers 00:35:23.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:23.711 Controller IO queue size 128, less than required. 00:35:23.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:23.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:23.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:23.711 Initialization complete. Launching workers. 
00:35:23.711 ======================================================== 00:35:23.711 Latency(us) 00:35:23.711 Device Information : IOPS MiB/s Average min max 00:35:23.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002337.75 1000227.49 1005693.92 00:35:23.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004090.74 1000439.66 1010925.47 00:35:23.711 ======================================================== 00:35:23.711 Total : 256.00 0.12 1003214.25 1000227.49 1010925.47 00:35:23.711 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4124416 00:35:23.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4124416) - No such process 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4124416 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.972 rmmod nvme_tcp 00:35:23.972 rmmod nvme_fabrics 00:35:23.972 rmmod nvme_keyring 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4123588 ']' 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4123588 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 4123588 ']' 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 4123588 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:23.972 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
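The roughly 1.00 s per-I/O averages in the summary above line up with the 1,000,000 µs delays configured on Delay0, and no 'errors occurred' line follows this table. Once the perf PID disappears, nvmftestfini unwinds the setup; condensed from the surrounding trace, the cleanup amounts to:

  modprobe -v -r nvme-tcp                                # also pulls out nvme_fabrics / nvme_keyring (the rmmod lines above)
  kill 4123588 && wait 4123588                           # stop the interrupt-mode nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's port-4420 ACCEPT rule
  _remove_spdk_ns                                        # script helper expected to delete cvl_0_0_ns_spdk; its output is redirected away here
  ip -4 addr flush cvl_0_1                               # clear the initiator address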
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4123588 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4123588' 00:35:24.234 killing process with pid 4123588 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 4123588 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 4123588 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.234 10:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.780 00:35:26.780 real 0m18.741s 00:35:26.780 user 0m26.406s 00:35:26.780 sys 0m7.721s 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:26.780 ************************************ 00:35:26.780 END TEST nvmf_delete_subsystem 00:35:26.780 ************************************ 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:26.780 ************************************ 00:35:26.780 START TEST nvmf_host_management 00:35:26.780 ************************************ 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:26.780 * Looking for test storage... 00:35:26.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.780 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.781 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:26.781 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.781 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:26.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.781 --rc genhtml_branch_coverage=1 00:35:26.781 --rc genhtml_function_coverage=1 00:35:26.781 --rc genhtml_legend=1 00:35:26.781 --rc geninfo_all_blocks=1 00:35:26.781 --rc geninfo_unexecuted_blocks=1 00:35:26.781 00:35:26.781 ' 00:35:26.781 10:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:26.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.781 --rc genhtml_branch_coverage=1 00:35:26.781 --rc genhtml_function_coverage=1 00:35:26.781 --rc genhtml_legend=1 00:35:26.781 --rc geninfo_all_blocks=1 00:35:26.781 --rc geninfo_unexecuted_blocks=1 00:35:26.781 00:35:26.781 ' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:26.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.781 --rc genhtml_branch_coverage=1 00:35:26.781 --rc genhtml_function_coverage=1 00:35:26.781 --rc genhtml_legend=1 00:35:26.781 --rc geninfo_all_blocks=1 00:35:26.781 --rc geninfo_unexecuted_blocks=1 00:35:26.781 00:35:26.781 ' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:26.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.781 --rc genhtml_branch_coverage=1 00:35:26.781 --rc genhtml_function_coverage=1 00:35:26.781 --rc genhtml_legend=1 
00:35:26.781 --rc geninfo_all_blocks=1 00:35:26.781 --rc geninfo_unexecuted_blocks=1 00:35:26.781 00:35:26.781 ' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.781 10:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.781 10:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.989 10:27:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:34.989 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:34.989 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:34.989 Found net devices under 0000:31:00.0: cvl_0_0 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:34.989 Found net devices under 0000:31:00.1: cvl_0_1 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.989 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:35:34.990 00:35:34.990 --- 10.0.0.2 ping statistics --- 00:35:34.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.990 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:35:34.990 00:35:34.990 --- 10.0.0.1 ping statistics --- 00:35:34.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.990 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4129775 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4129775 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4129775 ']' 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:34.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.990 10:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 [2024-11-06 10:27:38.481829] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:34.990 [2024-11-06 10:27:38.482887] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:34.990 [2024-11-06 10:27:38.482928] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.252 [2024-11-06 10:27:38.589944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:35.252 [2024-11-06 10:27:38.643891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.252 [2024-11-06 10:27:38.643947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.252 [2024-11-06 10:27:38.643960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.252 [2024-11-06 10:27:38.643967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.252 [2024-11-06 10:27:38.643973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.252 [2024-11-06 10:27:38.646007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.252 [2024-11-06 10:27:38.646155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.252 [2024-11-06 10:27:38.646320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.252 [2024-11-06 10:27:38.646321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:35.252 [2024-11-06 10:27:38.722309] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:35.252 [2024-11-06 10:27:38.722976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:35.252 [2024-11-06 10:27:38.723919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:35.252 [2024-11-06 10:27:38.723974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:35.252 [2024-11-06 10:27:38.724103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
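For orientation, the target-startup portion of the trace above reduces to roughly the following shell steps. This is a minimal sketch, not the suite's own helpers: the cvl_0_0_ns_spdk namespace name, core mask -m 0x1E and shared-memory id -i 0 are taken from this run, and calling rpc.py framework_wait_init here stands in for the waitforlisten polling the script actually performs.

    # Start the SPDK NVMe-oF target inside the test network namespace, in interrupt mode
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Block until the target's RPC socket is up before issuing any further RPCs
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init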
00:35:35.823 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:35.823 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:35:35.823 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.823 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.823 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 [2024-11-06 10:27:39.335171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 Malloc0 00:35:36.085 [2024-11-06 10:27:39.427389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4130144 00:35:36.085 10:27:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4130144 /var/tmp/bdevperf.sock 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4130144 ']' 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.085 { 00:35:36.085 "params": { 00:35:36.085 "name": "Nvme$subsystem", 00:35:36.085 "trtype": "$TEST_TRANSPORT", 00:35:36.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.085 "adrfam": "ipv4", 00:35:36.085 "trsvcid": "$NVMF_PORT", 00:35:36.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.085 "hdgst": ${hdgst:-false}, 00:35:36.085 "ddgst": ${ddgst:-false} 00:35:36.085 }, 00:35:36.085 "method": "bdev_nvme_attach_controller" 00:35:36.085 } 00:35:36.085 EOF 00:35:36.085 )") 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:35:36.085 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:36.086 10:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.086 "params": { 00:35:36.086 "name": "Nvme0", 00:35:36.086 "trtype": "tcp", 00:35:36.086 "traddr": "10.0.0.2", 00:35:36.086 "adrfam": "ipv4", 00:35:36.086 "trsvcid": "4420", 00:35:36.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.086 "hdgst": false, 00:35:36.086 "ddgst": false 00:35:36.086 }, 00:35:36.086 "method": "bdev_nvme_attach_controller" 00:35:36.086 }' 00:35:36.086 [2024-11-06 10:27:39.542046] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:36.086 [2024-11-06 10:27:39.542110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130144 ] 00:35:36.345 [2024-11-06 10:27:39.620760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.345 [2024-11-06 10:27:39.656796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.604 Running I/O for 10 seconds... 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:36.864 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.127 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:37.127 [2024-11-06 10:27:40.414831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 
[2024-11-06 10:27:40.414958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.414998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the 
state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b800 is same with the state(6) to be set 00:35:37.127 [2024-11-06 10:27:40.415490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.127 [2024-11-06 10:27:40.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.127 [2024-11-06 10:27:40.415572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.127 [2024-11-06 10:27:40.415589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.127 [2024-11-06 10:27:40.415606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.127 [2024-11-06 10:27:40.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.127 [2024-11-06 10:27:40.415631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.415989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.415997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.128 [2024-11-06 10:27:40.416311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.128 [2024-11-06 10:27:40.416319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.129 [2024-11-06 10:27:40.416626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x763370 is same with the state(6) to be set 00:35:37.129 [2024-11-06 10:27:40.416714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.129 [2024-11-06 10:27:40.416726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.129 [2024-11-06 10:27:40.416743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.129 [2024-11-06 10:27:40.416758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:37.129 [2024-11-06 10:27:40.416773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.416781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x752b00 is same with the state(6) to be set 00:35:37.129 [2024-11-06 10:27:40.418012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:37.129 task offset: 90112 on job bdev=Nvme0n1 fails 00:35:37.129 00:35:37.129 Latency(us) 00:35:37.129 [2024-11-06T09:27:40.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.129 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:37.129 Job: Nvme0n1 ended in about 0.50 seconds with error 00:35:37.129 Verification LBA range: start 0x0 length 0x400 00:35:37.129 Nvme0n1 : 0.50 1422.22 88.89 129.29 0.00 40179.08 6062.08 35826.35 00:35:37.129 [2024-11-06T09:27:40.630Z] =================================================================================================================== 00:35:37.129 [2024-11-06T09:27:40.630Z] Total : 1422.22 88.89 129.29 0.00 40179.08 6062.08 35826.35 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.129 [2024-11-06 10:27:40.420010] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:37.129 [2024-11-06 10:27:40.420035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752b00 (9): Bad file descriptor 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:37.129 [2024-11-06 10:27:40.421406] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:35:37.129 [2024-11-06 10:27:40.421476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:35:37.129 [2024-11-06 10:27:40.421498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.129 [2024-11-06 10:27:40.421512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:35:37.129 [2024-11-06 10:27:40.421519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: 
Connect command completed with error: sct 1, sc 132 00:35:37.129 [2024-11-06 10:27:40.421527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.129 [2024-11-06 10:27:40.421535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x752b00 00:35:37.129 [2024-11-06 10:27:40.421554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752b00 (9): Bad file descriptor 00:35:37.129 [2024-11-06 10:27:40.421566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:37.129 [2024-11-06 10:27:40.421574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:37.129 [2024-11-06 10:27:40.421583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:37.129 [2024-11-06 10:27:40.421592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.129 10:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:38.072 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4130144 00:35:38.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4130144) - No such process 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.073 { 00:35:38.073 "params": { 00:35:38.073 "name": "Nvme$subsystem", 00:35:38.073 "trtype": "$TEST_TRANSPORT", 00:35:38.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.073 "adrfam": "ipv4", 00:35:38.073 "trsvcid": "$NVMF_PORT", 00:35:38.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.073 "hdgst": ${hdgst:-false}, 00:35:38.073 "ddgst": ${ddgst:-false} 00:35:38.073 }, 00:35:38.073 "method": "bdev_nvme_attach_controller" 00:35:38.073 } 00:35:38.073 EOF 00:35:38.073 )") 00:35:38.073 10:27:41 
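The first bdevperf run, which failed above, used the same -q 64 -o 65536 parameters as the invocation just shown, and its Latency summary can be sanity-checked with simple arithmetic. The snippet below is illustrative only (it is not part of the SPDK test scripts; every number in it is copied from the log lines above): 1422.22 IOPS at 65536-byte IOs is about 88.89 MiB/s, matching the MiB/s column, and 129.29 Fail/s over the roughly 0.50 s runtime is about 65 failed IOs, consistent with the 64 aborted READ commands (cid 0 through 63) dumped earlier.

# Illustrative sanity check of the failed bdevperf run's Latency summary.
# Not part of the SPDK test suite; all values are copied from the log above.
IO_SIZE_BYTES = 65536      # bdevperf -o 65536 ("IO size: 65536" in the job line)
MIB = 1024 * 1024

iops = 1422.22             # reported IOPS for Nvme0n1
fails_per_sec = 129.29     # reported Fail/s
runtime_s = 0.50           # "Job: Nvme0n1 ended in about 0.50 seconds with error"

throughput_mib_s = iops * IO_SIZE_BYTES / MIB
failed_ios = fails_per_sec * runtime_s

print(f"throughput ~ {throughput_mib_s:.2f} MiB/s")  # ~88.89, matches the MiB/s column
print(f"failed IOs  ~ {failed_ios:.0f}")             # ~65, in line with the 64 aborted READs (cid 0-63)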
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:38.073 10:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:38.073 "params": { 00:35:38.073 "name": "Nvme0", 00:35:38.073 "trtype": "tcp", 00:35:38.073 "traddr": "10.0.0.2", 00:35:38.073 "adrfam": "ipv4", 00:35:38.073 "trsvcid": "4420", 00:35:38.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.073 "hdgst": false, 00:35:38.073 "ddgst": false 00:35:38.073 }, 00:35:38.073 "method": "bdev_nvme_attach_controller" 00:35:38.073 }' 00:35:38.073 [2024-11-06 10:27:41.491002] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:38.073 [2024-11-06 10:27:41.491057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130499 ] 00:35:38.073 [2024-11-06 10:27:41.569048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.334 [2024-11-06 10:27:41.605010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.334 Running I/O for 1 seconds... 00:35:39.716 1554.00 IOPS, 97.12 MiB/s 00:35:39.716 Latency(us) 00:35:39.716 [2024-11-06T09:27:43.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.716 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:39.716 Verification LBA range: start 0x0 length 0x400 00:35:39.716 Nvme0n1 : 1.02 1598.49 99.91 0.00 0.00 39183.22 2662.40 35826.35 00:35:39.716 [2024-11-06T09:27:43.217Z] =================================================================================================================== 00:35:39.716 [2024-11-06T09:27:43.217Z] Total : 1598.49 99.91 0.00 0.00 39183.22 2662.40 35826.35 00:35:39.716 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:39.716 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:39.716 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:39.716 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:39.716 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:39.717 10:27:42 
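The controller-attach parameters that gen_nvmf_target_json rendered above are plain JSON. As a purely illustrative aside (not part of the SPDK test scripts, and note that the complete config handed to bdevperf via --json /dev/fd/62 may wrap such entries further), the fragment printed in the log can be re-parsed to show what the run attaches to:

# Illustrative re-parse of the bdev_nvme_attach_controller parameters printed above.
# Not part of the SPDK test suite; the JSON literal is copied from the log output.
import json

fragment = """
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
"""

cfg = json.loads(fragment)
# One NVMe-oF/TCP controller is attached at 10.0.0.2:4420 under subsystem cnode0.
print(cfg["method"], cfg["params"]["traddr"], cfg["params"]["trsvcid"], cfg["params"]["subnqn"])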
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:39.717 rmmod nvme_tcp 00:35:39.717 rmmod nvme_fabrics 00:35:39.717 rmmod nvme_keyring 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4129775 ']' 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4129775 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 4129775 ']' 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 4129775 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:39.717 10:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4129775 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4129775' 00:35:39.717 killing process with pid 4129775 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 4129775 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 4129775 00:35:39.717 [2024-11-06 10:27:43.159277] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.717 10:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:42.263 00:35:42.263 real 0m15.463s 00:35:42.263 user 0m19.248s 00:35:42.263 sys 0m7.979s 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:42.263 ************************************ 00:35:42.263 END TEST nvmf_host_management 00:35:42.263 ************************************ 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:42.263 ************************************ 00:35:42.263 START TEST nvmf_lvol 00:35:42.263 ************************************ 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:42.263 * Looking for test storage... 
00:35:42.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.263 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:42.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.264 --rc genhtml_branch_coverage=1 00:35:42.264 --rc genhtml_function_coverage=1 00:35:42.264 --rc genhtml_legend=1 00:35:42.264 --rc geninfo_all_blocks=1 00:35:42.264 --rc geninfo_unexecuted_blocks=1 00:35:42.264 00:35:42.264 ' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:42.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.264 --rc genhtml_branch_coverage=1 00:35:42.264 --rc genhtml_function_coverage=1 00:35:42.264 --rc genhtml_legend=1 00:35:42.264 --rc geninfo_all_blocks=1 00:35:42.264 --rc geninfo_unexecuted_blocks=1 00:35:42.264 00:35:42.264 ' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:42.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.264 --rc genhtml_branch_coverage=1 00:35:42.264 --rc genhtml_function_coverage=1 00:35:42.264 --rc genhtml_legend=1 00:35:42.264 --rc geninfo_all_blocks=1 00:35:42.264 --rc geninfo_unexecuted_blocks=1 00:35:42.264 00:35:42.264 ' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:42.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.264 --rc genhtml_branch_coverage=1 00:35:42.264 --rc genhtml_function_coverage=1 00:35:42.264 --rc genhtml_legend=1 00:35:42.264 --rc geninfo_all_blocks=1 00:35:42.264 --rc geninfo_unexecuted_blocks=1 00:35:42.264 00:35:42.264 ' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.264 10:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:42.264 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:42.265 10:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:50.407 10:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:50.407 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:50.407 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:50.407 Found net devices under 0000:31:00.0: cvl_0_0 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:50.407 Found net devices under 0000:31:00.1: cvl_0_1 00:35:50.407 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.408 
10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:50.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:35:50.408 00:35:50.408 --- 10.0.0.2 ping statistics --- 00:35:50.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.408 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:35:50.408 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:35:50.670 00:35:50.670 --- 10.0.0.1 ping statistics --- 00:35:50.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.670 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4135516 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4135516 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 4135516 ']' 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:50.670 10:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:50.670 [2024-11-06 10:27:54.030639] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:35:50.670 [2024-11-06 10:27:54.032117] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:50.670 [2024-11-06 10:27:54.032187] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.670 [2024-11-06 10:27:54.125592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:50.670 [2024-11-06 10:27:54.168796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.670 [2024-11-06 10:27:54.168829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:50.670 [2024-11-06 10:27:54.168837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.670 [2024-11-06 10:27:54.168844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.670 [2024-11-06 10:27:54.168850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:50.670 [2024-11-06 10:27:54.170183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.670 [2024-11-06 10:27:54.170270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:50.670 [2024-11-06 10:27:54.170273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.930 [2024-11-06 10:27:54.225715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:50.930 [2024-11-06 10:27:54.226151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:50.930 [2024-11-06 10:27:54.226496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:50.930 [2024-11-06 10:27:54.226769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
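The trace above is nvmftestinit plus nvmfappstart for the nvmf_lvol test: one E810 port (cvl_0_0) is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420, and nvmf_tgt is then launched inside the namespace in interrupt mode on three cores. A minimal stand-alone sketch of that bring-up, assuming the cvl_0_0/cvl_0_1 interface names and the addresses shown in the log:

  # Sketch of the environment bring-up traced above; interface names and
  # addresses are taken from the log, not hard requirements.
  set -e
  ip netns add cvl_0_0_ns_spdk                               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator sanity check
  # Start the target inside the namespace, interrupt mode, cores 0-2 (-m 0x7),
  # matching the nvmfappstart invocation above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

The reactor and poll-group notices above confirm that all three target threads come up in interrupt mode before any RPCs are issued.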
00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.502 10:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:51.763 [2024-11-06 10:27:55.030806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.763 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:52.024 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:52.024 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:52.024 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:52.024 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:52.285 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:52.545 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=477cf7bc-7400-4566-89dd-ba55b0160968 00:35:52.546 10:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 477cf7bc-7400-4566-89dd-ba55b0160968 lvol 20 00:35:52.546 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3ef23a72-5c62-475a-bfa3-50b43d6b9309 00:35:52.546 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:52.806 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ef23a72-5c62-475a-bfa3-50b43d6b9309 00:35:53.067 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.067 [2024-11-06 10:27:56.494937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:35:53.067 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:53.327 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4135918 00:35:53.327 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:53.327 10:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:54.268 10:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3ef23a72-5c62-475a-bfa3-50b43d6b9309 MY_SNAPSHOT 00:35:54.528 10:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=90aa4471-4328-4009-b289-c98966dab9b5 00:35:54.528 10:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3ef23a72-5c62-475a-bfa3-50b43d6b9309 30 00:35:54.787 10:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 90aa4471-4328-4009-b289-c98966dab9b5 MY_CLONE 00:35:55.047 10:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=59fb8c1f-4b01-482b-9f72-1904871dcb2b 00:35:55.047 10:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 59fb8c1f-4b01-482b-9f72-1904871dcb2b 00:35:55.307 10:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4135918 00:36:05.303 Initializing NVMe Controllers 00:36:05.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:05.303 Controller IO queue size 128, less than required. 00:36:05.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:36:05.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:36:05.303 Initialization complete. Launching workers. 
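The perf run whose results follow was set up entirely through rpc.py: a raid0 bdev over two 64 MiB malloc bdevs, an lvstore on the raid, a 20 MiB lvol, and an NVMe-oF subsystem (cnode0) exposing that lvol on 10.0.0.2:4420; spdk_nvme_perf then drives 4 KiB random writes at queue depth 128 while the lvol is snapshotted, resized to 30 MiB, cloned, and the clone inflated. A condensed sketch of that RPC flow, with UUIDs captured from each call's stdout just as the script does above:

  # Condensed sketch of the nvmf_lvol RPC flow traced above.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                             # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                             # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)             # lvstore on the raid0 bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)            # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &     # perf on cores 3-4 (0x18)
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)        # snapshot under live I/O
  $rpc bdev_lvol_resize "$lvol" 30                           # grow the lvol 20 -> 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                            # detach the clone from its snapshot
  wait                                                       # let the 10 s perf run finish

The nvmf_delete_subsystem / bdev_lvol_delete / bdev_lvol_delete_lvstore calls further down tear the same stack back down in reverse order.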
00:36:05.303 ======================================================== 00:36:05.303 Latency(us) 00:36:05.303 Device Information : IOPS MiB/s Average min max 00:36:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12327.20 48.15 10389.78 1524.80 67557.29 00:36:05.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15505.60 60.57 8255.02 2365.04 49573.92 00:36:05.303 ======================================================== 00:36:05.303 Total : 27832.80 108.72 9200.51 1524.80 67557.29 00:36:05.303 00:36:05.303 10:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ef23a72-5c62-475a-bfa3-50b43d6b9309 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 477cf7bc-7400-4566-89dd-ba55b0160968 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:05.303 rmmod nvme_tcp 00:36:05.303 rmmod nvme_fabrics 00:36:05.303 rmmod nvme_keyring 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4135516 ']' 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4135516 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 4135516 ']' 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 4135516 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4135516 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:05.303 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4135516' 00:36:05.303 killing process with pid 4135516 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 4135516 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 4135516 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.304 10:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.687 00:36:06.687 real 0m24.514s 00:36:06.687 user 0m55.699s 00:36:06.687 sys 0m11.083s 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:06.687 ************************************ 00:36:06.687 END TEST nvmf_lvol 00:36:06.687 ************************************ 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:06.687 ************************************ 00:36:06.687 START TEST nvmf_lvs_grow 00:36:06.687 
************************************ 00:36:06.687 10:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:06.687 * Looking for test storage... 00:36:06.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:06.687 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:06.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.688 --rc genhtml_branch_coverage=1 00:36:06.688 --rc genhtml_function_coverage=1 00:36:06.688 --rc genhtml_legend=1 00:36:06.688 --rc geninfo_all_blocks=1 00:36:06.688 --rc geninfo_unexecuted_blocks=1 00:36:06.688 00:36:06.688 ' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:06.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.688 --rc genhtml_branch_coverage=1 00:36:06.688 --rc genhtml_function_coverage=1 00:36:06.688 --rc genhtml_legend=1 00:36:06.688 --rc geninfo_all_blocks=1 00:36:06.688 --rc geninfo_unexecuted_blocks=1 00:36:06.688 00:36:06.688 ' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:06.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.688 --rc genhtml_branch_coverage=1 00:36:06.688 --rc genhtml_function_coverage=1 00:36:06.688 --rc genhtml_legend=1 00:36:06.688 --rc geninfo_all_blocks=1 00:36:06.688 --rc geninfo_unexecuted_blocks=1 00:36:06.688 00:36:06.688 ' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:06.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.688 --rc genhtml_branch_coverage=1 00:36:06.688 --rc genhtml_function_coverage=1 00:36:06.688 --rc genhtml_legend=1 00:36:06.688 --rc geninfo_all_blocks=1 00:36:06.688 --rc geninfo_unexecuted_blocks=1 00:36:06.688 00:36:06.688 ' 00:36:06.688 10:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.688 10:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:16.686 10:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:16.686 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:16.687 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:16.687 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:16.687 Found net devices under 0000:31:00.0: cvl_0_0 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:16.687 Found net devices under 0000:31:00.1: cvl_0_1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:16.687 10:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:16.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:16.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:36:16.687 00:36:16.687 --- 10.0.0.2 ping statistics --- 00:36:16.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.687 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:16.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:16.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:36:16.687 00:36:16.687 --- 10.0.0.1 ping statistics --- 00:36:16.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:16.687 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4142836 00:36:16.687 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4142836 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 4142836 ']' 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:16.688 10:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:16.688 [2024-11-06 10:28:18.800219] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
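Note on the network setup traced above: nvmf_tcp_init moves one of the two e810 ports into a private network namespace so target and initiator exchange real TCP traffic on the same host. A condensed sketch of that sequence, using the interface names and 10.0.0.x addresses from this run (they will differ on other hosts), with the long workspace paths abbreviated:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # reachability check, root -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and namespace -> root
    # nvmfappstart then launches the target inside the namespace, single core, interrupt mode:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1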
00:36:16.688 [2024-11-06 10:28:18.801358] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:16.688 [2024-11-06 10:28:18.801411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.688 [2024-11-06 10:28:18.891481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.688 [2024-11-06 10:28:18.931975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.688 [2024-11-06 10:28:18.932009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.688 [2024-11-06 10:28:18.932017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.688 [2024-11-06 10:28:18.932024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.688 [2024-11-06 10:28:18.932030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.688 [2024-11-06 10:28:18.932649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.688 [2024-11-06 10:28:18.987764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:16.688 [2024-11-06 10:28:18.988029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:16.688 [2024-11-06 10:28:19.797470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:16.688 ************************************ 00:36:16.688 START TEST lvs_grow_clean 00:36:16.688 ************************************ 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:16.688 10:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:16.688 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:16.688 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21350299-764f-46a2-bcab-7dbd8da79f29 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:16.948 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21350299-764f-46a2-bcab-7dbd8da79f29 lvol 150 00:36:17.209 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1a47b39-466c-4ffd-b5d3-2d0234647f77 00:36:17.209 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:17.209 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:17.470 [2024-11-06 10:28:20.773118] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:17.470 [2024-11-06 10:28:20.773277] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:17.470 true 00:36:17.470 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:17.470 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:17.730 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:17.730 10:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:17.730 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1a47b39-466c-4ffd-b5d3-2d0234647f77 00:36:17.991 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.991 [2024-11-06 10:28:21.473691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4143299 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4143299 /var/tmp/bdevperf.sock 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 4143299 ']' 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:18.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:18.252 10:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:18.252 [2024-11-06 10:28:21.727851] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:18.252 [2024-11-06 10:28:21.727958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143299 ] 00:36:18.513 [2024-11-06 10:28:21.828709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.513 [2024-11-06 10:28:21.880382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.087 10:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:19.087 10:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:36:19.087 10:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:19.659 Nvme0n1 00:36:19.659 10:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:19.659 [ 00:36:19.659 { 00:36:19.659 "name": "Nvme0n1", 00:36:19.659 "aliases": [ 00:36:19.659 "c1a47b39-466c-4ffd-b5d3-2d0234647f77" 00:36:19.659 ], 00:36:19.659 "product_name": "NVMe disk", 00:36:19.659 "block_size": 4096, 00:36:19.659 "num_blocks": 38912, 00:36:19.659 "uuid": "c1a47b39-466c-4ffd-b5d3-2d0234647f77", 00:36:19.659 "numa_id": 0, 00:36:19.659 "assigned_rate_limits": { 00:36:19.659 "rw_ios_per_sec": 0, 00:36:19.659 "rw_mbytes_per_sec": 0, 00:36:19.659 "r_mbytes_per_sec": 0, 00:36:19.659 "w_mbytes_per_sec": 0 00:36:19.659 }, 00:36:19.659 "claimed": false, 00:36:19.659 "zoned": false, 00:36:19.659 "supported_io_types": { 00:36:19.659 "read": true, 00:36:19.659 "write": true, 00:36:19.659 "unmap": true, 00:36:19.659 "flush": true, 00:36:19.659 "reset": true, 00:36:19.659 "nvme_admin": true, 00:36:19.659 "nvme_io": true, 00:36:19.659 "nvme_io_md": false, 00:36:19.659 "write_zeroes": true, 00:36:19.659 "zcopy": false, 00:36:19.659 "get_zone_info": false, 00:36:19.659 "zone_management": false, 00:36:19.659 "zone_append": false, 00:36:19.659 "compare": true, 00:36:19.659 "compare_and_write": true, 00:36:19.659 "abort": true, 00:36:19.659 "seek_hole": false, 00:36:19.659 "seek_data": false, 00:36:19.659 "copy": true, 
00:36:19.659 "nvme_iov_md": false 00:36:19.659 }, 00:36:19.659 "memory_domains": [ 00:36:19.659 { 00:36:19.659 "dma_device_id": "system", 00:36:19.659 "dma_device_type": 1 00:36:19.659 } 00:36:19.659 ], 00:36:19.659 "driver_specific": { 00:36:19.659 "nvme": [ 00:36:19.659 { 00:36:19.659 "trid": { 00:36:19.659 "trtype": "TCP", 00:36:19.659 "adrfam": "IPv4", 00:36:19.659 "traddr": "10.0.0.2", 00:36:19.659 "trsvcid": "4420", 00:36:19.659 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:19.659 }, 00:36:19.659 "ctrlr_data": { 00:36:19.659 "cntlid": 1, 00:36:19.659 "vendor_id": "0x8086", 00:36:19.659 "model_number": "SPDK bdev Controller", 00:36:19.659 "serial_number": "SPDK0", 00:36:19.659 "firmware_revision": "25.01", 00:36:19.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.659 "oacs": { 00:36:19.659 "security": 0, 00:36:19.659 "format": 0, 00:36:19.659 "firmware": 0, 00:36:19.659 "ns_manage": 0 00:36:19.659 }, 00:36:19.659 "multi_ctrlr": true, 00:36:19.659 "ana_reporting": false 00:36:19.659 }, 00:36:19.659 "vs": { 00:36:19.659 "nvme_version": "1.3" 00:36:19.659 }, 00:36:19.659 "ns_data": { 00:36:19.659 "id": 1, 00:36:19.659 "can_share": true 00:36:19.659 } 00:36:19.659 } 00:36:19.659 ], 00:36:19.659 "mp_policy": "active_passive" 00:36:19.659 } 00:36:19.659 } 00:36:19.659 ] 00:36:19.920 10:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4143630 00:36:19.920 10:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:19.920 10:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:19.920 Running I/O for 10 seconds... 
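The lvs_grow_clean run that produces the per-second tables below is driven entirely through rpc.py against the in-namespace target. A minimal sketch of the flow traced above; rpc.py abbreviates the full scripts/rpc.py path, $LVS stands for the lvstore UUID 21350299-764f-46a2-bcab-7dbd8da79f29 and $LVOL for the lvol UUID c1a47b39-466c-4ffd-b5d3-2d0234647f77 reported earlier in the log:

    # 200 MiB AIO file -> lvstore with 4 MiB clusters (49 usable data clusters), 150 MiB lvol on top
    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u $LVS lvol 150

    # grow the backing file and let the AIO bdev pick up the new size (51200 -> 102400 blocks)
    truncate -s 400M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev

    # export the lvol over NVMe/TCP on 10.0.0.2:4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $LVOL
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # while bdevperf runs its 10 s randwrite workload, grow the lvstore and expect 49 -> 99 clusters
    rpc.py bdev_lvol_grow_lvstore -u $LVS
    rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'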
00:36:20.863 Latency(us) 00:36:20.863 [2024-11-06T09:28:24.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:20.863 Nvme0n1 : 1.00 17657.00 68.97 0.00 0.00 0.00 0.00 0.00 00:36:20.863 [2024-11-06T09:28:24.364Z] =================================================================================================================== 00:36:20.863 [2024-11-06T09:28:24.364Z] Total : 17657.00 68.97 0.00 0.00 0.00 0.00 0.00 00:36:20.863 00:36:21.803 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:21.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:21.803 Nvme0n1 : 2.00 17782.00 69.46 0.00 0.00 0.00 0.00 0.00 00:36:21.803 [2024-11-06T09:28:25.304Z] =================================================================================================================== 00:36:21.803 [2024-11-06T09:28:25.304Z] Total : 17782.00 69.46 0.00 0.00 0.00 0.00 0.00 00:36:21.803 00:36:22.064 true 00:36:22.064 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:22.064 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:22.064 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:22.064 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:22.064 10:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4143630 00:36:23.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:23.006 Nvme0n1 : 3.00 17823.67 69.62 0.00 0.00 0.00 0.00 0.00 00:36:23.006 [2024-11-06T09:28:26.507Z] =================================================================================================================== 00:36:23.006 [2024-11-06T09:28:26.507Z] Total : 17823.67 69.62 0.00 0.00 0.00 0.00 0.00 00:36:23.006 00:36:23.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:23.948 Nvme0n1 : 4.00 17876.25 69.83 0.00 0.00 0.00 0.00 0.00 00:36:23.948 [2024-11-06T09:28:27.449Z] =================================================================================================================== 00:36:23.948 [2024-11-06T09:28:27.449Z] Total : 17876.25 69.83 0.00 0.00 0.00 0.00 0.00 00:36:23.948 00:36:24.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:24.890 Nvme0n1 : 5.00 17907.80 69.95 0.00 0.00 0.00 0.00 0.00 00:36:24.890 [2024-11-06T09:28:28.391Z] =================================================================================================================== 00:36:24.890 [2024-11-06T09:28:28.391Z] Total : 17907.80 69.95 0.00 0.00 0.00 0.00 0.00 00:36:24.890 00:36:25.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:25.831 Nvme0n1 : 6.00 17928.83 70.03 0.00 0.00 0.00 0.00 0.00 00:36:25.831 [2024-11-06T09:28:29.332Z] 
=================================================================================================================== 00:36:25.831 [2024-11-06T09:28:29.332Z] Total : 17928.83 70.03 0.00 0.00 0.00 0.00 0.00 00:36:25.831 00:36:26.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:26.837 Nvme0n1 : 7.00 17943.86 70.09 0.00 0.00 0.00 0.00 0.00 00:36:26.837 [2024-11-06T09:28:30.338Z] =================================================================================================================== 00:36:26.837 [2024-11-06T09:28:30.338Z] Total : 17943.86 70.09 0.00 0.00 0.00 0.00 0.00 00:36:26.837 00:36:27.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:27.778 Nvme0n1 : 8.00 17955.12 70.14 0.00 0.00 0.00 0.00 0.00 00:36:27.778 [2024-11-06T09:28:31.279Z] =================================================================================================================== 00:36:27.778 [2024-11-06T09:28:31.279Z] Total : 17955.12 70.14 0.00 0.00 0.00 0.00 0.00 00:36:27.778 00:36:29.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:29.161 Nvme0n1 : 9.00 17963.89 70.17 0.00 0.00 0.00 0.00 0.00 00:36:29.161 [2024-11-06T09:28:32.662Z] =================================================================================================================== 00:36:29.161 [2024-11-06T09:28:32.662Z] Total : 17963.89 70.17 0.00 0.00 0.00 0.00 0.00 00:36:29.161 00:36:30.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.102 Nvme0n1 : 10.00 17979.00 70.23 0.00 0.00 0.00 0.00 0.00 00:36:30.102 [2024-11-06T09:28:33.603Z] =================================================================================================================== 00:36:30.102 [2024-11-06T09:28:33.603Z] Total : 17979.00 70.23 0.00 0.00 0.00 0.00 0.00 00:36:30.102 00:36:30.102 00:36:30.102 Latency(us) 00:36:30.102 [2024-11-06T09:28:33.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.103 Nvme0n1 : 10.01 17982.98 70.25 0.00 0.00 7114.79 2362.03 13598.72 00:36:30.103 [2024-11-06T09:28:33.604Z] =================================================================================================================== 00:36:30.103 [2024-11-06T09:28:33.604Z] Total : 17982.98 70.25 0.00 0.00 7114.79 2362.03 13598.72 00:36:30.103 { 00:36:30.103 "results": [ 00:36:30.103 { 00:36:30.103 "job": "Nvme0n1", 00:36:30.103 "core_mask": "0x2", 00:36:30.103 "workload": "randwrite", 00:36:30.103 "status": "finished", 00:36:30.103 "queue_depth": 128, 00:36:30.103 "io_size": 4096, 00:36:30.103 "runtime": 10.007464, 00:36:30.103 "iops": 17982.977505589828, 00:36:30.103 "mibps": 70.24600588121027, 00:36:30.103 "io_failed": 0, 00:36:30.103 "io_timeout": 0, 00:36:30.103 "avg_latency_us": 7114.789447963666, 00:36:30.103 "min_latency_us": 2362.0266666666666, 00:36:30.103 "max_latency_us": 13598.72 00:36:30.103 } 00:36:30.103 ], 00:36:30.103 "core_count": 1 00:36:30.103 } 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4143299 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 4143299 ']' 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 4143299 00:36:30.103 
10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4143299 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4143299' 00:36:30.103 killing process with pid 4143299 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 4143299 00:36:30.103 Received shutdown signal, test time was about 10.000000 seconds 00:36:30.103 00:36:30.103 Latency(us) 00:36:30.103 [2024-11-06T09:28:33.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.103 [2024-11-06T09:28:33.604Z] =================================================================================================================== 00:36:30.103 [2024-11-06T09:28:33.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 4143299 00:36:30.103 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:30.363 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.363 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:30.363 10:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:30.623 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:30.623 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:30.623 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:30.884 [2024-11-06 10:28:34.157208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:30.884 
10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:30.884 request: 00:36:30.884 { 00:36:30.884 "uuid": "21350299-764f-46a2-bcab-7dbd8da79f29", 00:36:30.884 "method": "bdev_lvol_get_lvstores", 00:36:30.884 "req_id": 1 00:36:30.884 } 00:36:30.884 Got JSON-RPC error response 00:36:30.884 response: 00:36:30.884 { 00:36:30.884 "code": -19, 00:36:30.884 "message": "No such device" 00:36:30.884 } 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:30.884 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:30.885 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:31.145 aio_bdev 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c1a47b39-466c-4ffd-b5d3-2d0234647f77 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=c1a47b39-466c-4ffd-b5d3-2d0234647f77 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:31.145 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:31.406 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c1a47b39-466c-4ffd-b5d3-2d0234647f77 -t 2000 00:36:31.406 [ 00:36:31.406 { 00:36:31.407 "name": "c1a47b39-466c-4ffd-b5d3-2d0234647f77", 00:36:31.407 "aliases": [ 00:36:31.407 "lvs/lvol" 00:36:31.407 ], 00:36:31.407 "product_name": "Logical Volume", 00:36:31.407 "block_size": 4096, 00:36:31.407 "num_blocks": 38912, 00:36:31.407 "uuid": "c1a47b39-466c-4ffd-b5d3-2d0234647f77", 00:36:31.407 "assigned_rate_limits": { 00:36:31.407 "rw_ios_per_sec": 0, 00:36:31.407 "rw_mbytes_per_sec": 0, 00:36:31.407 "r_mbytes_per_sec": 0, 00:36:31.407 "w_mbytes_per_sec": 0 00:36:31.407 }, 00:36:31.407 "claimed": false, 00:36:31.407 "zoned": false, 00:36:31.407 "supported_io_types": { 00:36:31.407 "read": true, 00:36:31.407 "write": true, 00:36:31.407 "unmap": true, 00:36:31.407 "flush": false, 00:36:31.407 "reset": true, 00:36:31.407 "nvme_admin": false, 00:36:31.407 "nvme_io": false, 00:36:31.407 "nvme_io_md": false, 00:36:31.407 "write_zeroes": true, 00:36:31.407 "zcopy": false, 00:36:31.407 "get_zone_info": false, 00:36:31.407 "zone_management": false, 00:36:31.407 "zone_append": false, 00:36:31.407 "compare": false, 00:36:31.407 "compare_and_write": false, 00:36:31.407 "abort": false, 00:36:31.407 "seek_hole": true, 00:36:31.407 "seek_data": true, 00:36:31.407 "copy": false, 00:36:31.407 "nvme_iov_md": false 00:36:31.407 }, 00:36:31.407 "driver_specific": { 00:36:31.407 "lvol": { 00:36:31.407 "lvol_store_uuid": "21350299-764f-46a2-bcab-7dbd8da79f29", 00:36:31.407 "base_bdev": "aio_bdev", 00:36:31.407 "thin_provision": false, 00:36:31.407 "num_allocated_clusters": 38, 00:36:31.407 "snapshot": false, 00:36:31.407 "clone": false, 00:36:31.407 "esnap_clone": false 00:36:31.407 } 00:36:31.407 } 00:36:31.407 } 00:36:31.407 ] 00:36:31.407 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:36:31.407 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:31.407 10:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:31.668 10:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:31.668 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:31.668 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:31.928 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:31.928 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1a47b39-466c-4ffd-b5d3-2d0234647f77 00:36:31.928 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21350299-764f-46a2-bcab-7dbd8da79f29 00:36:32.188 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:32.449 00:36:32.449 real 0m15.931s 00:36:32.449 user 0m15.630s 00:36:32.449 sys 0m1.458s 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.449 ************************************ 00:36:32.449 END TEST lvs_grow_clean 00:36:32.449 ************************************ 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:32.449 ************************************ 00:36:32.449 START TEST lvs_grow_dirty 00:36:32.449 ************************************ 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:32.449 10:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:32.709 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:32.709 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=61c659e0-a50f-4683-9686-492b06b899c5 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:32.970 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61c659e0-a50f-4683-9686-492b06b899c5 lvol 150 00:36:33.231 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=175de459-f7ed-493e-994d-1980cc4c08ec 00:36:33.231 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:33.231 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:33.492 [2024-11-06 10:28:36.753026] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:33.492 [2024-11-06 10:28:36.753120] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:33.492 true 00:36:33.492 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:33.492 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:33.492 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:33.492 10:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:33.752 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 175de459-f7ed-493e-994d-1980cc4c08ec 00:36:34.013 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:34.013 [2024-11-06 10:28:37.433371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.013 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4146374 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4146374 /var/tmp/bdevperf.sock 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4146374 ']' 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:34.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
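The bdevperf half of the harness is launched with -z so it idles until driven over its own RPC socket; the exported lvol is then attached as a remote NVMe namespace and the workload is kicked off explicitly. A rough sketch of that handshake (backgrounding and the waitforlisten polling are glossed over; paths abbreviated as above):

    # 4 KiB random writes, queue depth 128, 10 s, stats every second (-S 1), wait for RPC (-z)
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the exported lvol as Nvme0n1 over NVMe/TCP
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # start the configured workload; the Latency(us) tables in the log are its per-second progress
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests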
00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.274 10:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:34.274 [2024-11-06 10:28:37.682882] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:34.274 [2024-11-06 10:28:37.682939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146374 ] 00:36:34.274 [2024-11-06 10:28:37.773359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.534 [2024-11-06 10:28:37.804286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.211 10:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:35.211 10:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:36:35.211 10:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:35.471 Nvme0n1 00:36:35.471 10:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:35.731 [ 00:36:35.731 { 00:36:35.731 "name": "Nvme0n1", 00:36:35.731 "aliases": [ 00:36:35.731 "175de459-f7ed-493e-994d-1980cc4c08ec" 00:36:35.731 ], 00:36:35.731 "product_name": "NVMe disk", 00:36:35.731 "block_size": 4096, 00:36:35.731 "num_blocks": 38912, 00:36:35.731 "uuid": "175de459-f7ed-493e-994d-1980cc4c08ec", 00:36:35.731 "numa_id": 0, 00:36:35.731 "assigned_rate_limits": { 00:36:35.731 "rw_ios_per_sec": 0, 00:36:35.731 "rw_mbytes_per_sec": 0, 00:36:35.731 "r_mbytes_per_sec": 0, 00:36:35.731 "w_mbytes_per_sec": 0 00:36:35.731 }, 00:36:35.731 "claimed": false, 00:36:35.731 "zoned": false, 00:36:35.731 "supported_io_types": { 00:36:35.731 "read": true, 00:36:35.731 "write": true, 00:36:35.731 "unmap": true, 00:36:35.731 "flush": true, 00:36:35.732 "reset": true, 00:36:35.732 "nvme_admin": true, 00:36:35.732 "nvme_io": true, 00:36:35.732 "nvme_io_md": false, 00:36:35.732 "write_zeroes": true, 00:36:35.732 "zcopy": false, 00:36:35.732 "get_zone_info": false, 00:36:35.732 "zone_management": false, 00:36:35.732 "zone_append": false, 00:36:35.732 "compare": true, 00:36:35.732 "compare_and_write": true, 00:36:35.732 "abort": true, 00:36:35.732 "seek_hole": false, 00:36:35.732 "seek_data": false, 00:36:35.732 "copy": true, 00:36:35.732 "nvme_iov_md": false 00:36:35.732 }, 00:36:35.732 "memory_domains": [ 00:36:35.732 { 00:36:35.732 "dma_device_id": "system", 00:36:35.732 "dma_device_type": 1 00:36:35.732 } 00:36:35.732 ], 00:36:35.732 "driver_specific": { 00:36:35.732 "nvme": [ 00:36:35.732 { 00:36:35.732 "trid": { 00:36:35.732 "trtype": "TCP", 00:36:35.732 "adrfam": "IPv4", 00:36:35.732 "traddr": "10.0.0.2", 00:36:35.732 "trsvcid": "4420", 00:36:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:35.732 }, 00:36:35.732 "ctrlr_data": 
{ 00:36:35.732 "cntlid": 1, 00:36:35.732 "vendor_id": "0x8086", 00:36:35.732 "model_number": "SPDK bdev Controller", 00:36:35.732 "serial_number": "SPDK0", 00:36:35.732 "firmware_revision": "25.01", 00:36:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.732 "oacs": { 00:36:35.732 "security": 0, 00:36:35.732 "format": 0, 00:36:35.732 "firmware": 0, 00:36:35.732 "ns_manage": 0 00:36:35.732 }, 00:36:35.732 "multi_ctrlr": true, 00:36:35.732 "ana_reporting": false 00:36:35.732 }, 00:36:35.732 "vs": { 00:36:35.732 "nvme_version": "1.3" 00:36:35.732 }, 00:36:35.732 "ns_data": { 00:36:35.732 "id": 1, 00:36:35.732 "can_share": true 00:36:35.732 } 00:36:35.732 } 00:36:35.732 ], 00:36:35.732 "mp_policy": "active_passive" 00:36:35.732 } 00:36:35.732 } 00:36:35.732 ] 00:36:35.732 10:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:35.732 10:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4146714 00:36:35.732 10:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:35.732 Running I/O for 10 seconds... 00:36:36.675 Latency(us) 00:36:36.675 [2024-11-06T09:28:40.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:36.675 Nvme0n1 : 1.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:36:36.675 [2024-11-06T09:28:40.176Z] =================================================================================================================== 00:36:36.675 [2024-11-06T09:28:40.176Z] Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:36:36.675 00:36:37.616 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:37.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:37.616 Nvme0n1 : 2.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:36:37.616 [2024-11-06T09:28:41.117Z] =================================================================================================================== 00:36:37.616 [2024-11-06T09:28:41.117Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:36:37.616 00:36:37.877 true 00:36:37.877 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:37.877 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:38.137 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:38.137 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:38.137 10:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4146714 00:36:38.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:38.708 Nvme0n1 : 
3.00 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:36:38.708 [2024-11-06T09:28:42.209Z] =================================================================================================================== 00:36:38.708 [2024-11-06T09:28:42.209Z] Total : 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:36:38.708 00:36:39.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:39.647 Nvme0n1 : 4.00 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:36:39.647 [2024-11-06T09:28:43.148Z] =================================================================================================================== 00:36:39.647 [2024-11-06T09:28:43.148Z] Total : 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:36:39.647 00:36:41.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:41.031 Nvme0n1 : 5.00 17881.60 69.85 0.00 0.00 0.00 0.00 0.00 00:36:41.031 [2024-11-06T09:28:44.532Z] =================================================================================================================== 00:36:41.031 [2024-11-06T09:28:44.532Z] Total : 17881.60 69.85 0.00 0.00 0.00 0.00 0.00 00:36:41.031 00:36:41.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:41.971 Nvme0n1 : 6.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:36:41.971 [2024-11-06T09:28:45.472Z] =================================================================================================================== 00:36:41.971 [2024-11-06T09:28:45.472Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:36:41.971 00:36:42.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:42.912 Nvme0n1 : 7.00 17925.14 70.02 0.00 0.00 0.00 0.00 0.00 00:36:42.912 [2024-11-06T09:28:46.413Z] =================================================================================================================== 00:36:42.912 [2024-11-06T09:28:46.413Z] Total : 17925.14 70.02 0.00 0.00 0.00 0.00 0.00 00:36:42.912 00:36:43.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:43.853 Nvme0n1 : 8.00 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:36:43.853 [2024-11-06T09:28:47.354Z] =================================================================================================================== 00:36:43.853 [2024-11-06T09:28:47.354Z] Total : 17938.75 70.07 0.00 0.00 0.00 0.00 0.00 00:36:43.853 00:36:44.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:44.793 Nvme0n1 : 9.00 17949.33 70.11 0.00 0.00 0.00 0.00 0.00 00:36:44.793 [2024-11-06T09:28:48.294Z] =================================================================================================================== 00:36:44.793 [2024-11-06T09:28:48.294Z] Total : 17949.33 70.11 0.00 0.00 0.00 0.00 0.00 00:36:44.793 00:36:45.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:45.733 Nvme0n1 : 10.00 17957.80 70.15 0.00 0.00 0.00 0.00 0.00 00:36:45.733 [2024-11-06T09:28:49.234Z] =================================================================================================================== 00:36:45.733 [2024-11-06T09:28:49.234Z] Total : 17957.80 70.15 0.00 0.00 0.00 0.00 0.00 00:36:45.733 00:36:45.733 00:36:45.733 Latency(us) 00:36:45.733 [2024-11-06T09:28:49.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:45.733 Nvme0n1 : 10.00 17966.47 70.18 0.00 0.00 7121.83 6526.29 14417.92 00:36:45.733 
[2024-11-06T09:28:49.234Z] =================================================================================================================== 00:36:45.733 [2024-11-06T09:28:49.234Z] Total : 17966.47 70.18 0.00 0.00 7121.83 6526.29 14417.92 00:36:45.733 { 00:36:45.733 "results": [ 00:36:45.733 { 00:36:45.733 "job": "Nvme0n1", 00:36:45.733 "core_mask": "0x2", 00:36:45.733 "workload": "randwrite", 00:36:45.733 "status": "finished", 00:36:45.733 "queue_depth": 128, 00:36:45.733 "io_size": 4096, 00:36:45.733 "runtime": 10.002301, 00:36:45.733 "iops": 17966.465916192683, 00:36:45.733 "mibps": 70.18150748512767, 00:36:45.733 "io_failed": 0, 00:36:45.733 "io_timeout": 0, 00:36:45.733 "avg_latency_us": 7121.832836596071, 00:36:45.733 "min_latency_us": 6526.293333333333, 00:36:45.733 "max_latency_us": 14417.92 00:36:45.733 } 00:36:45.733 ], 00:36:45.733 "core_count": 1 00:36:45.733 } 00:36:45.733 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4146374 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 4146374 ']' 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 4146374 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4146374 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4146374' 00:36:45.734 killing process with pid 4146374 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 4146374 00:36:45.734 Received shutdown signal, test time was about 10.000000 seconds 00:36:45.734 00:36:45.734 Latency(us) 00:36:45.734 [2024-11-06T09:28:49.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.734 [2024-11-06T09:28:49.235Z] =================================================================================================================== 00:36:45.734 [2024-11-06T09:28:49.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:45.734 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 4146374 00:36:45.994 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:45.994 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:36:46.255 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:46.255 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4142836 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4142836 00:36:46.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4142836 Killed "${NVMF_APP[@]}" "$@" 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4148727 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4148727 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4148727 ']' 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:46.516 10:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:46.516 [2024-11-06 10:28:49.927979] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:46.516 [2024-11-06 10:28:49.929085] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:46.516 [2024-11-06 10:28:49.929143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.776 [2024-11-06 10:28:50.019218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.776 [2024-11-06 10:28:50.064724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:46.776 [2024-11-06 10:28:50.064767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:46.776 [2024-11-06 10:28:50.064776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:46.776 [2024-11-06 10:28:50.064783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:46.776 [2024-11-06 10:28:50.064789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:46.776 [2024-11-06 10:28:50.065359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.776 [2024-11-06 10:28:50.123665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:46.776 [2024-11-06 10:28:50.123931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.347 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:47.607 [2024-11-06 10:28:50.972384] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:47.607 [2024-11-06 10:28:50.972515] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:47.607 [2024-11-06 10:28:50.972548] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 175de459-f7ed-493e-994d-1980cc4c08ec 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=175de459-f7ed-493e-994d-1980cc4c08ec 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:47.607 10:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:47.868 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 175de459-f7ed-493e-994d-1980cc4c08ec -t 2000 00:36:47.868 [ 00:36:47.868 { 00:36:47.868 "name": "175de459-f7ed-493e-994d-1980cc4c08ec", 00:36:47.868 "aliases": [ 00:36:47.868 "lvs/lvol" 00:36:47.868 ], 00:36:47.868 "product_name": "Logical Volume", 00:36:47.868 "block_size": 4096, 00:36:47.868 "num_blocks": 38912, 00:36:47.868 "uuid": "175de459-f7ed-493e-994d-1980cc4c08ec", 00:36:47.868 "assigned_rate_limits": { 00:36:47.868 "rw_ios_per_sec": 0, 00:36:47.868 "rw_mbytes_per_sec": 0, 00:36:47.868 
"r_mbytes_per_sec": 0, 00:36:47.868 "w_mbytes_per_sec": 0 00:36:47.868 }, 00:36:47.868 "claimed": false, 00:36:47.868 "zoned": false, 00:36:47.868 "supported_io_types": { 00:36:47.868 "read": true, 00:36:47.868 "write": true, 00:36:47.868 "unmap": true, 00:36:47.868 "flush": false, 00:36:47.868 "reset": true, 00:36:47.868 "nvme_admin": false, 00:36:47.868 "nvme_io": false, 00:36:47.868 "nvme_io_md": false, 00:36:47.868 "write_zeroes": true, 00:36:47.868 "zcopy": false, 00:36:47.868 "get_zone_info": false, 00:36:47.868 "zone_management": false, 00:36:47.868 "zone_append": false, 00:36:47.868 "compare": false, 00:36:47.868 "compare_and_write": false, 00:36:47.868 "abort": false, 00:36:47.868 "seek_hole": true, 00:36:47.868 "seek_data": true, 00:36:47.868 "copy": false, 00:36:47.868 "nvme_iov_md": false 00:36:47.868 }, 00:36:47.868 "driver_specific": { 00:36:47.868 "lvol": { 00:36:47.868 "lvol_store_uuid": "61c659e0-a50f-4683-9686-492b06b899c5", 00:36:47.868 "base_bdev": "aio_bdev", 00:36:47.868 "thin_provision": false, 00:36:47.868 "num_allocated_clusters": 38, 00:36:47.868 "snapshot": false, 00:36:47.868 "clone": false, 00:36:47.868 "esnap_clone": false 00:36:47.868 } 00:36:47.868 } 00:36:47.868 } 00:36:47.868 ] 00:36:47.868 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:36:47.868 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:47.868 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:48.129 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:48.129 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:48.129 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:48.389 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:48.389 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:48.389 [2024-11-06 10:28:51.873982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:48.650 10:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:48.650 10:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:48.650 request: 00:36:48.650 { 00:36:48.650 "uuid": "61c659e0-a50f-4683-9686-492b06b899c5", 00:36:48.650 "method": "bdev_lvol_get_lvstores", 00:36:48.650 "req_id": 1 00:36:48.650 } 00:36:48.650 Got JSON-RPC error response 00:36:48.650 response: 00:36:48.650 { 00:36:48.650 "code": -19, 00:36:48.650 "message": "No such device" 00:36:48.650 } 00:36:48.650 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:36:48.650 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:48.650 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:48.650 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:48.650 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:48.910 aio_bdev 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 175de459-f7ed-493e-994d-1980cc4c08ec 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=175de459-f7ed-493e-994d-1980cc4c08ec 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:48.910 10:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:48.910 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:49.171 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 175de459-f7ed-493e-994d-1980cc4c08ec -t 2000 00:36:49.171 [ 00:36:49.171 { 00:36:49.171 "name": "175de459-f7ed-493e-994d-1980cc4c08ec", 00:36:49.171 "aliases": [ 00:36:49.171 "lvs/lvol" 00:36:49.171 ], 00:36:49.171 "product_name": "Logical Volume", 00:36:49.171 "block_size": 4096, 00:36:49.171 "num_blocks": 38912, 00:36:49.171 "uuid": "175de459-f7ed-493e-994d-1980cc4c08ec", 00:36:49.171 "assigned_rate_limits": { 00:36:49.171 "rw_ios_per_sec": 0, 00:36:49.171 "rw_mbytes_per_sec": 0, 00:36:49.171 "r_mbytes_per_sec": 0, 00:36:49.171 "w_mbytes_per_sec": 0 00:36:49.171 }, 00:36:49.171 "claimed": false, 00:36:49.171 "zoned": false, 00:36:49.171 "supported_io_types": { 00:36:49.171 "read": true, 00:36:49.171 "write": true, 00:36:49.171 "unmap": true, 00:36:49.171 "flush": false, 00:36:49.171 "reset": true, 00:36:49.171 "nvme_admin": false, 00:36:49.171 "nvme_io": false, 00:36:49.171 "nvme_io_md": false, 00:36:49.171 "write_zeroes": true, 00:36:49.171 "zcopy": false, 00:36:49.171 "get_zone_info": false, 00:36:49.171 "zone_management": false, 00:36:49.171 "zone_append": false, 00:36:49.171 "compare": false, 00:36:49.171 "compare_and_write": false, 00:36:49.171 "abort": false, 00:36:49.171 "seek_hole": true, 00:36:49.171 "seek_data": true, 00:36:49.171 "copy": false, 00:36:49.171 "nvme_iov_md": false 00:36:49.171 }, 00:36:49.171 "driver_specific": { 00:36:49.171 "lvol": { 00:36:49.171 "lvol_store_uuid": "61c659e0-a50f-4683-9686-492b06b899c5", 00:36:49.171 "base_bdev": "aio_bdev", 00:36:49.171 "thin_provision": false, 00:36:49.171 "num_allocated_clusters": 38, 00:36:49.171 "snapshot": false, 00:36:49.171 "clone": false, 00:36:49.171 "esnap_clone": false 00:36:49.171 } 00:36:49.171 } 00:36:49.171 } 00:36:49.171 ] 00:36:49.171 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:36:49.171 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:49.171 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:49.432 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:49.432 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:49.432 10:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:49.692 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:49.692 10:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 175de459-f7ed-493e-994d-1980cc4c08ec 00:36:49.692 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61c659e0-a50f-4683-9686-492b06b899c5 00:36:49.952 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:50.213 00:36:50.213 real 0m17.705s 00:36:50.213 user 0m35.571s 00:36:50.213 sys 0m3.042s 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:50.213 ************************************ 00:36:50.213 END TEST lvs_grow_dirty 00:36:50.213 ************************************ 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:50.213 nvmf_trace.0 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.213 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.213 rmmod nvme_tcp 00:36:50.213 rmmod nvme_fabrics 00:36:50.475 rmmod nvme_keyring 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4148727 ']' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 4148727 ']' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4148727' 00:36:50.475 killing process with pid 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 4148727 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.475 10:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.022 00:36:53.022 real 0m46.083s 00:36:53.022 user 0m54.485s 00:36:53.022 sys 0m11.366s 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:53.022 ************************************ 00:36:53.022 END TEST nvmf_lvs_grow 00:36:53.022 ************************************ 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:53.022 ************************************ 00:36:53.022 START TEST nvmf_bdev_io_wait 00:36:53.022 ************************************ 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:53.022 * Looking for test storage... 
00:36:53.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:53.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.022 --rc genhtml_branch_coverage=1 00:36:53.022 --rc genhtml_function_coverage=1 00:36:53.022 --rc genhtml_legend=1 00:36:53.022 --rc geninfo_all_blocks=1 00:36:53.022 --rc geninfo_unexecuted_blocks=1 00:36:53.022 00:36:53.022 ' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:53.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.022 --rc genhtml_branch_coverage=1 00:36:53.022 --rc genhtml_function_coverage=1 00:36:53.022 --rc genhtml_legend=1 00:36:53.022 --rc geninfo_all_blocks=1 00:36:53.022 --rc geninfo_unexecuted_blocks=1 00:36:53.022 00:36:53.022 ' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:53.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.022 --rc genhtml_branch_coverage=1 00:36:53.022 --rc genhtml_function_coverage=1 00:36:53.022 --rc genhtml_legend=1 00:36:53.022 --rc geninfo_all_blocks=1 00:36:53.022 --rc geninfo_unexecuted_blocks=1 00:36:53.022 00:36:53.022 ' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:53.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.022 --rc genhtml_branch_coverage=1 00:36:53.022 --rc genhtml_function_coverage=1 00:36:53.022 --rc genhtml_legend=1 00:36:53.022 --rc geninfo_all_blocks=1 00:36:53.022 --rc 
geninfo_unexecuted_blocks=1 00:36:53.022 00:36:53.022 ' 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:53.022 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.023 10:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.160 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:01.161 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:01.161 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:01.161 Found net devices under 0000:31:00.0: cvl_0_0 00:37:01.161 
10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:01.161 Found net devices under 0000:31:00.1: cvl_0_1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:01.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:37:01.161 00:37:01.161 --- 10.0.0.2 ping statistics --- 00:37:01.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.161 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:37:01.161 00:37:01.161 --- 10.0.0.1 ping statistics --- 00:37:01.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.161 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.161 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:01.162 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:01.422 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:01.422 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4154147 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4154147 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 4154147 ']' 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
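At this point the test network is in place: the target port cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP/4420 is allowed in, and a ping in each direction confirms reachability. A minimal sketch of the same topology, substituting a veth pair for the two physical E810 ports so it can be reproduced on any box (interface and namespace names here are illustrative, root required):

  #!/usr/bin/env bash
  set -e
  ns=tgt_ns                                            # stand-in for cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link add veth_init type veth peer name veth_tgt   # stand-ins for cvl_0_1 / cvl_0_0
  ip link set veth_tgt netns "$ns"
  ip addr add 10.0.0.1/24 dev veth_init                       # initiator side (default ns)
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt    # target side (inside ns)
  ip link set veth_init up
  ip netns exec "$ns" ip link set veth_tgt up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # default ns -> target ns
  ip netns exec "$ns" ping -c 1 10.0.0.1               # target ns -> default ns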
00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.423 [2024-11-06 10:29:04.717469] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:01.423 [2024-11-06 10:29:04.718334] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:01.423 [2024-11-06 10:29:04.718378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.423 [2024-11-06 10:29:04.795332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:01.423 [2024-11-06 10:29:04.833247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.423 [2024-11-06 10:29:04.833281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.423 [2024-11-06 10:29:04.833289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.423 [2024-11-06 10:29:04.833295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.423 [2024-11-06 10:29:04.833301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.423 [2024-11-06 10:29:04.834804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.423 [2024-11-06 10:29:04.834939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:01.423 [2024-11-06 10:29:04.835255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:01.423 [2024-11-06 10:29:04.835256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.423 [2024-11-06 10:29:04.835636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
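The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, whose timeout check ((( i == 0 ))) and return 0 appear in the next entry. A rough sketch of such a poll loop, assuming only what the trace shows (a 100-iteration retry budget and the PID/socket pair passed in); the real helper in autotest_common.sh is more thorough:

  wait_for_listen_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process died
          [[ -S $sock ]] && break                  # RPC socket is up
          sleep 0.1
      done
      (( i == 0 )) && return 1                     # retries exhausted
      return 0
  }
  # usage: wait_for_listen_sketch "$nvmfpid" /var/tmp/spdk.sock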
00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.423 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.684 [2024-11-06 10:29:04.958316] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:01.684 [2024-11-06 10:29:04.959151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:01.684 [2024-11-06 10:29:04.959521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:01.684 [2024-11-06 10:29:04.959764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
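Because nvmf_tgt was launched with --wait-for-rpc, the bdev layer can be tuned before initialization completes: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool and cache (presumably to force the I/O-wait path this test exercises), and framework_start_init then lets the target finish coming up, flipping each poll-group thread to interrupt mode. Issued directly with rpc.py, the same two RPCs would look roughly like this (rpc_cmd in the harness wraps the same calls; the script path is illustrative):

  rpc=./scripts/rpc.py                   # adjust to the SPDK checkout in use
  $rpc bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, same flags as the trace
  $rpc framework_start_init              # finish subsystem initialization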
00:37:01.684 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.684 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:01.684 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.684 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.684 [2024-11-06 10:29:04.971820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.684 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.685 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:01.685 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.685 10:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.685 Malloc0 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:01.685 [2024-11-06 10:29:05.036021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4154188 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4154190 00:37:01.685 10:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:01.685 { 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme$subsystem", 00:37:01.685 "trtype": "$TEST_TRANSPORT", 00:37:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "$NVMF_PORT", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.685 "hdgst": ${hdgst:-false}, 00:37:01.685 "ddgst": ${ddgst:-false} 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 } 00:37:01.685 EOF 00:37:01.685 )") 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4154193 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4154196 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:01.685 { 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme$subsystem", 00:37:01.685 "trtype": "$TEST_TRANSPORT", 00:37:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "$NVMF_PORT", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.685 "hdgst": ${hdgst:-false}, 00:37:01.685 "ddgst": ${ddgst:-false} 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 } 00:37:01.685 EOF 00:37:01.685 )") 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
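Before the four bdevperf instances above were launched, the target was provisioned with a malloc-backed subsystem (the rpc_cmd entries just above). Expressed as direct rpc.py calls, that sequence is roughly the following; the flags are the ones from the trace, and the script path is illustrative:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420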
00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:01.685 { 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme$subsystem", 00:37:01.685 "trtype": "$TEST_TRANSPORT", 00:37:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "$NVMF_PORT", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.685 "hdgst": ${hdgst:-false}, 00:37:01.685 "ddgst": ${ddgst:-false} 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 } 00:37:01.685 EOF 00:37:01.685 )") 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:01.685 { 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme$subsystem", 00:37:01.685 "trtype": "$TEST_TRANSPORT", 00:37:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "$NVMF_PORT", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.685 "hdgst": ${hdgst:-false}, 00:37:01.685 "ddgst": ${ddgst:-false} 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 } 00:37:01.685 EOF 00:37:01.685 )") 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4154188 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
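gen_nvmf_target_json collects one heredoc fragment per subsystem into a bash array, joins the fragments with IFS=',', and pretty-prints the result through jq; each bdevperf instance then reads it as --json /dev/fd/63, which is typically what a <(...) process substitution looks like to the child process. A stripped-down sketch of that plumbing (the fragment content is shortened here, the bdevperf path is elided, and the full attach-controller parameters are the ones printed in the next entry):

  gen_json_sketch() {
      local config=()
      config+=('{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1" } }')
      local IFS=,
      printf '%s\n' "${config[*]}" | jq .    # comma-join the fragments, pretty-print
  }
  # each bdevperf instance reads the generated document via process substitution, e.g.:
  bdevperf -m 0x10 -i 1 --json <(gen_json_sketch) -q 128 -o 4096 -w write -t 1 -s 256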
00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme1", 00:37:01.685 "trtype": "tcp", 00:37:01.685 "traddr": "10.0.0.2", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "4420", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.685 "hdgst": false, 00:37:01.685 "ddgst": false 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 }' 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:01.685 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:01.685 "params": { 00:37:01.685 "name": "Nvme1", 00:37:01.685 "trtype": "tcp", 00:37:01.685 "traddr": "10.0.0.2", 00:37:01.685 "adrfam": "ipv4", 00:37:01.685 "trsvcid": "4420", 00:37:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.685 "hdgst": false, 00:37:01.685 "ddgst": false 00:37:01.685 }, 00:37:01.685 "method": "bdev_nvme_attach_controller" 00:37:01.685 }' 00:37:01.686 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:01.686 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:01.686 "params": { 00:37:01.686 "name": "Nvme1", 00:37:01.686 "trtype": "tcp", 00:37:01.686 "traddr": "10.0.0.2", 00:37:01.686 "adrfam": "ipv4", 00:37:01.686 "trsvcid": "4420", 00:37:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.686 "hdgst": false, 00:37:01.686 "ddgst": false 00:37:01.686 }, 00:37:01.686 "method": "bdev_nvme_attach_controller" 00:37:01.686 }' 00:37:01.686 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:01.686 10:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:01.686 "params": { 00:37:01.686 "name": "Nvme1", 00:37:01.686 "trtype": "tcp", 00:37:01.686 "traddr": "10.0.0.2", 00:37:01.686 "adrfam": "ipv4", 00:37:01.686 "trsvcid": "4420", 00:37:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.686 "hdgst": false, 00:37:01.686 "ddgst": false 00:37:01.686 }, 00:37:01.686 "method": "bdev_nvme_attach_controller" 00:37:01.686 }' 00:37:01.686 [2024-11-06 10:29:05.092246] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:01.686 [2024-11-06 10:29:05.092302] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:01.686 [2024-11-06 10:29:05.092668] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:37:01.686 [2024-11-06 10:29:05.092717] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:37:01.686 [2024-11-06 10:29:05.094058] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:01.686 [2024-11-06 10:29:05.094105] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:37:01.686 [2024-11-06 10:29:05.096318] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:01.686 [2024-11-06 10:29:05.096363] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:37:01.947 [2024-11-06 10:29:05.264922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.947 [2024-11-06 10:29:05.293472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:01.947 [2024-11-06 10:29:05.321764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.947 [2024-11-06 10:29:05.350585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:01.947 [2024-11-06 10:29:05.382216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.947 [2024-11-06 10:29:05.411881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:01.947 [2024-11-06 10:29:05.427576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.208 [2024-11-06 10:29:05.455801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:02.208 Running I/O for 1 seconds... 00:37:02.208 Running I/O for 1 seconds... 00:37:02.208 Running I/O for 1 seconds... 00:37:02.208 Running I/O for 1 seconds... 
00:37:03.151 188312.00 IOPS, 735.59 MiB/s 00:37:03.151 Latency(us) 00:37:03.151 [2024-11-06T09:29:06.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:37:03.151 Nvme1n1 : 1.00 187940.36 734.14 0.00 0.00 677.02 302.08 1966.08 00:37:03.151 [2024-11-06T09:29:06.652Z] =================================================================================================================== 00:37:03.151 [2024-11-06T09:29:06.652Z] Total : 187940.36 734.14 0.00 0.00 677.02 302.08 1966.08 00:37:03.151 7593.00 IOPS, 29.66 MiB/s 00:37:03.151 Latency(us) 00:37:03.151 [2024-11-06T09:29:06.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.151 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:37:03.151 Nvme1n1 : 1.02 7609.27 29.72 0.00 0.00 16685.48 4341.76 25340.59 00:37:03.151 [2024-11-06T09:29:06.652Z] =================================================================================================================== 00:37:03.151 [2024-11-06T09:29:06.652Z] Total : 7609.27 29.72 0.00 0.00 16685.48 4341.76 25340.59 00:37:03.412 19487.00 IOPS, 76.12 MiB/s 00:37:03.412 Latency(us) 00:37:03.412 [2024-11-06T09:29:06.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.412 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:37:03.412 Nvme1n1 : 1.01 19555.63 76.39 0.00 0.00 6528.86 3017.39 10649.60 00:37:03.412 [2024-11-06T09:29:06.913Z] =================================================================================================================== 00:37:03.412 [2024-11-06T09:29:06.913Z] Total : 19555.63 76.39 0.00 0.00 6528.86 3017.39 10649.60 00:37:03.412 7629.00 IOPS, 29.80 MiB/s 00:37:03.412 Latency(us) 00:37:03.412 [2024-11-06T09:29:06.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.412 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:37:03.412 Nvme1n1 : 1.01 7754.92 30.29 0.00 0.00 16461.94 3741.01 30583.47 00:37:03.412 [2024-11-06T09:29:06.913Z] =================================================================================================================== 00:37:03.412 [2024-11-06T09:29:06.913Z] Total : 7754.92 30.29 0.00 0.00 16461.94 3741.01 30583.47 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4154190 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4154193 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4154196 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.412 rmmod nvme_tcp 00:37:03.412 rmmod nvme_fabrics 00:37:03.412 rmmod nvme_keyring 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4154147 ']' 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4154147 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 4154147 ']' 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 4154147 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:03.412 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4154147 00:37:03.674 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:03.674 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:03.674 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4154147' 00:37:03.674 killing process with pid 4154147 00:37:03.674 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 4154147 00:37:03.674 10:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 4154147 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
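nvmftestfini, installed earlier as the exit trap, now tears the environment down: the nvme-tcp/fabrics/keyring modules are unloaded, nvmf_tgt (pid 4154147) is killed after a sanity check on its process name, and, continuing below, the SPDK_NVMF-tagged iptables rule is dropped and the test namespace removed. Condensed into plain commands (PIDs and names are the ones from this run; killprocess in the harness is more careful):

  modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as echoed above
  modprobe -v -r nvme-fabrics
  kill 4154147                     # nvmf_tgt, see nvmfpid above
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
  ip netns del cvl_0_0_ns_spdk     # returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1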
00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.674 10:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:06.219 00:37:06.219 real 0m13.014s 00:37:06.219 user 0m15.086s 00:37:06.219 sys 0m8.040s 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:06.219 ************************************ 00:37:06.219 END TEST nvmf_bdev_io_wait 00:37:06.219 ************************************ 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:06.219 ************************************ 00:37:06.219 START TEST nvmf_queue_depth 00:37:06.219 ************************************ 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:06.219 * Looking for test storage... 
00:37:06.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:06.219 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.220 --rc genhtml_branch_coverage=1 00:37:06.220 --rc genhtml_function_coverage=1 00:37:06.220 --rc genhtml_legend=1 00:37:06.220 --rc geninfo_all_blocks=1 00:37:06.220 --rc geninfo_unexecuted_blocks=1 00:37:06.220 00:37:06.220 ' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.220 --rc genhtml_branch_coverage=1 00:37:06.220 --rc genhtml_function_coverage=1 00:37:06.220 --rc genhtml_legend=1 00:37:06.220 --rc geninfo_all_blocks=1 00:37:06.220 --rc geninfo_unexecuted_blocks=1 00:37:06.220 00:37:06.220 ' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.220 --rc genhtml_branch_coverage=1 00:37:06.220 --rc genhtml_function_coverage=1 00:37:06.220 --rc genhtml_legend=1 00:37:06.220 --rc geninfo_all_blocks=1 00:37:06.220 --rc geninfo_unexecuted_blocks=1 00:37:06.220 00:37:06.220 ' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.220 --rc genhtml_branch_coverage=1 00:37:06.220 --rc genhtml_function_coverage=1 00:37:06.220 --rc genhtml_legend=1 00:37:06.220 --rc geninfo_all_blocks=1 00:37:06.220 --rc 
geninfo_unexecuted_blocks=1 00:37:06.220 00:37:06.220 ' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.220 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:37:06.221 10:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
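nvmf/common.sh, sourced again here by queue_depth.sh, generates a fresh host NQN with nvme gen-hostnqn and keeps the matching UUID as the host ID; the NVME_HOST array above carries them as --hostnqn/--hostid for tests that attach through the kernel initiator (this test drives I/O through bdevperf on /var/tmp/bdevperf.sock instead). A usage sketch with the addresses from this run; the subsystem NQN shown is the cnode1 subsystem from the previous test and is purely illustrative:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the UUID part doubles as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # and to detach again:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1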
00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.368 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.369 10:29:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:14.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:14.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:37:14.369 Found net devices under 0000:31:00.0: cvl_0_0 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:14.369 Found net devices under 0000:31:00.1: cvl_0_1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:37:14.369 00:37:14.369 --- 10.0.0.2 ping statistics --- 00:37:14.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.369 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:14.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:37:14.369 00:37:14.369 --- 10.0.0.1 ping statistics --- 00:37:14.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.369 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:14.369 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4159246 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4159246 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4159246 ']' 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
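Condensed, the network setup traced above amounts to the following sequence (a sketch of the commands shown in the trace; repo paths are shortened to the spdk checkout root, and the interface names and addresses are the ones discovered in this run, cvl_0_0/cvl_0_1 being the two E810 ports under 0000:31:00.0 and 0000:31:00.1):

# One port is moved into a private namespace to act as the target side;
# the other stays in the root namespace as the initiator side.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to port 4420, check reachability both ways,
# and load the kernel initiator module.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# Start the target inside the namespace, pinned to core 1 (-m 0x2), in interrupt mode.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2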
00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:14.370 10:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:14.631 [2024-11-06 10:29:17.912568] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.631 [2024-11-06 10:29:17.913740] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:14.631 [2024-11-06 10:29:17.913802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.631 [2024-11-06 10:29:18.026218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.631 [2024-11-06 10:29:18.076091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.631 [2024-11-06 10:29:18.076143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.631 [2024-11-06 10:29:18.076151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.631 [2024-11-06 10:29:18.076158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.631 [2024-11-06 10:29:18.076166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.631 [2024-11-06 10:29:18.076973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.892 [2024-11-06 10:29:18.151653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.892 [2024-11-06 10:29:18.151951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
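The app_setup_trace notices above also describe how the tracepoint groups enabled with -e 0xFFFF can be inspected while the target runs; per those notices (app name nvmf, shm id 0 for this instance):

# Snapshot live tracepoints from the running target...
spdk_trace -s nvmf -i 0
# ...or keep the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/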
00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 [2024-11-06 10:29:18.761806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 Malloc0 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
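The rpc_cmd calls above are the test suite's wrappers around scripts/rpc.py; issued by hand against the target's default RPC socket they would look roughly like this (a sketch; the Malloc0 and cnode1 names and sizes come from queue_depth.sh):

# Target-side configuration over /var/tmp/spdk.sock.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420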
00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 [2024-11-06 10:29:18.837943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4159548 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4159548 /var/tmp/bdevperf.sock 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4159548 ']' 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:15.465 10:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:15.465 [2024-11-06 10:29:18.894414] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:37:15.465 [2024-11-06 10:29:18.894466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159548 ] 00:37:15.726 [2024-11-06 10:29:18.974050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.726 [2024-11-06 10:29:19.013497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.297 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:16.297 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:37:16.298 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:16.298 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.298 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:16.558 NVMe0n1 00:37:16.558 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.558 10:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:16.558 Running I/O for 10 seconds... 00:37:18.882 8453.00 IOPS, 33.02 MiB/s [2024-11-06T09:29:23.324Z] 8815.00 IOPS, 34.43 MiB/s [2024-11-06T09:29:24.266Z] 9199.00 IOPS, 35.93 MiB/s [2024-11-06T09:29:25.209Z] 9886.00 IOPS, 38.62 MiB/s [2024-11-06T09:29:26.150Z] 10316.00 IOPS, 40.30 MiB/s [2024-11-06T09:29:27.092Z] 10653.33 IOPS, 41.61 MiB/s [2024-11-06T09:29:28.476Z] 10848.43 IOPS, 42.38 MiB/s [2024-11-06T09:29:29.419Z] 11035.00 IOPS, 43.11 MiB/s [2024-11-06T09:29:30.360Z] 11154.00 IOPS, 43.57 MiB/s [2024-11-06T09:29:30.360Z] 11267.30 IOPS, 44.01 MiB/s 00:37:26.859 Latency(us) 00:37:26.859 [2024-11-06T09:29:30.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.859 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:37:26.859 Verification LBA range: start 0x0 length 0x4000 00:37:26.859 NVMe0n1 : 10.06 11289.51 44.10 0.00 0.00 90381.31 23046.83 76895.57 00:37:26.859 [2024-11-06T09:29:30.360Z] =================================================================================================================== 00:37:26.859 [2024-11-06T09:29:30.360Z] Total : 11289.51 44.10 0.00 0.00 90381.31 23046.83 76895.57 00:37:26.859 { 00:37:26.859 "results": [ 00:37:26.859 { 00:37:26.859 "job": "NVMe0n1", 00:37:26.859 "core_mask": "0x1", 00:37:26.859 "workload": "verify", 00:37:26.859 "status": "finished", 00:37:26.859 "verify_range": { 00:37:26.859 "start": 0, 00:37:26.859 "length": 16384 00:37:26.859 }, 00:37:26.859 "queue_depth": 1024, 00:37:26.859 "io_size": 4096, 00:37:26.859 "runtime": 10.058892, 00:37:26.859 "iops": 11289.513795356388, 00:37:26.859 "mibps": 44.09966326311089, 00:37:26.859 "io_failed": 0, 00:37:26.859 "io_timeout": 0, 00:37:26.859 "avg_latency_us": 90381.31404391218, 00:37:26.859 "min_latency_us": 23046.826666666668, 00:37:26.859 "max_latency_us": 76895.57333333333 00:37:26.859 } 
00:37:26.859 ], 00:37:26.859 "core_count": 1 00:37:26.859 } 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4159548 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4159548 ']' 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4159548 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4159548 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4159548' 00:37:26.859 killing process with pid 4159548 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4159548 00:37:26.859 Received shutdown signal, test time was about 10.000000 seconds 00:37:26.859 00:37:26.859 Latency(us) 00:37:26.859 [2024-11-06T09:29:30.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.859 [2024-11-06T09:29:30.360Z] =================================================================================================================== 00:37:26.859 [2024-11-06T09:29:30.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4159548 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.859 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.859 rmmod nvme_tcp 00:37:27.120 rmmod nvme_fabrics 00:37:27.120 rmmod nvme_keyring 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
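On the initiator side, the queue-depth run traced above boils down to three steps (a sketch using the arguments from this run; bdevperf stays in the root namespace and reaches the target listener at 10.0.0.2:4420):

# 1. Start bdevperf idle (-z) on its own RPC socket: queue depth 1024,
#    4 KiB verify I/O, 10 second run.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# 2. Attach the NVMe-oF controller exported by the target.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the workload; this run finished at ~11.3k IOPS / 44 MiB/s.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests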
00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4159246 ']' 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4159246 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4159246 ']' 00:37:27.120 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4159246 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4159246 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4159246' 00:37:27.121 killing process with pid 4159246 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4159246 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4159246 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.121 10:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.668 00:37:29.668 real 0m23.463s 00:37:29.668 user 0m25.107s 00:37:29.668 sys 0m8.089s 00:37:29.668 10:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:29.668 ************************************ 00:37:29.668 END TEST nvmf_queue_depth 00:37:29.668 ************************************ 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.668 ************************************ 00:37:29.668 START TEST nvmf_target_multipath 00:37:29.668 ************************************ 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:29.668 * Looking for test storage... 00:37:29.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:29.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.668 --rc genhtml_branch_coverage=1 00:37:29.668 --rc genhtml_function_coverage=1 00:37:29.668 --rc genhtml_legend=1 00:37:29.668 --rc geninfo_all_blocks=1 00:37:29.668 --rc geninfo_unexecuted_blocks=1 00:37:29.668 00:37:29.668 ' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:29.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.668 --rc genhtml_branch_coverage=1 00:37:29.668 --rc genhtml_function_coverage=1 00:37:29.668 --rc genhtml_legend=1 00:37:29.668 --rc geninfo_all_blocks=1 00:37:29.668 --rc geninfo_unexecuted_blocks=1 00:37:29.668 00:37:29.668 ' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:29.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.668 --rc genhtml_branch_coverage=1 00:37:29.668 --rc genhtml_function_coverage=1 00:37:29.668 --rc genhtml_legend=1 
00:37:29.668 --rc geninfo_all_blocks=1 00:37:29.668 --rc geninfo_unexecuted_blocks=1 00:37:29.668 00:37:29.668 ' 00:37:29.668 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:29.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.668 --rc genhtml_branch_coverage=1 00:37:29.668 --rc genhtml_function_coverage=1 00:37:29.668 --rc genhtml_legend=1 00:37:29.668 --rc geninfo_all_blocks=1 00:37:29.668 --rc geninfo_unexecuted_blocks=1 00:37:29.668 00:37:29.668 ' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.669 10:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.669 10:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.815 10:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:37.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:37.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:37.815 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.816 10:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:37.816 Found net devices under 0000:31:00.0: cvl_0_0 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:37.816 Found net devices under 0000:31:00.1: cvl_0_1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.816 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:38.077 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:38.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:38.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:37:38.077 00:37:38.077 --- 10.0.0.2 ping statistics --- 00:37:38.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.077 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:37:38.077 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:38.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:37:38.077 00:37:38.077 --- 10.0.0.1 ping statistics --- 00:37:38.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.077 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:38.078 only one NIC for nvmf test 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:38.078 rmmod nvme_tcp 00:37:38.078 rmmod nvme_fabrics 00:37:38.078 rmmod nvme_keyring 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:38.078 10:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.078 10:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:40.622 10:29:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:40.622 00:37:40.622 real 0m10.832s 00:37:40.622 user 0m2.464s 00:37:40.622 sys 0m6.284s 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:40.622 ************************************ 00:37:40.622 END TEST nvmf_target_multipath 00:37:40.622 ************************************ 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:40.622 ************************************ 00:37:40.622 START TEST nvmf_zcopy 00:37:40.622 ************************************ 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:40.622 * Looking for test storage... 
00:37:40.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.622 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:40.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.622 --rc genhtml_branch_coverage=1 00:37:40.622 --rc genhtml_function_coverage=1 00:37:40.623 --rc genhtml_legend=1 00:37:40.623 --rc geninfo_all_blocks=1 00:37:40.623 --rc geninfo_unexecuted_blocks=1 00:37:40.623 00:37:40.623 ' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.623 --rc genhtml_branch_coverage=1 00:37:40.623 --rc genhtml_function_coverage=1 00:37:40.623 --rc genhtml_legend=1 00:37:40.623 --rc geninfo_all_blocks=1 00:37:40.623 --rc geninfo_unexecuted_blocks=1 00:37:40.623 00:37:40.623 ' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.623 --rc genhtml_branch_coverage=1 00:37:40.623 --rc genhtml_function_coverage=1 00:37:40.623 --rc genhtml_legend=1 00:37:40.623 --rc geninfo_all_blocks=1 00:37:40.623 --rc geninfo_unexecuted_blocks=1 00:37:40.623 00:37:40.623 ' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.623 --rc genhtml_branch_coverage=1 00:37:40.623 --rc genhtml_function_coverage=1 00:37:40.623 --rc genhtml_legend=1 00:37:40.623 --rc geninfo_all_blocks=1 00:37:40.623 --rc geninfo_unexecuted_blocks=1 00:37:40.623 00:37:40.623 ' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.623 10:29:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:40.623 10:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:37:48.768 10:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:48.768 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:48.768 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:48.768 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:48.769 Found net devices under 0000:31:00.0: cvl_0_0 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:48.769 Found net devices under 0000:31:00.1: cvl_0_1 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:48.769 10:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:48.769 10:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:48.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:48.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:37:48.769 00:37:48.769 --- 10.0.0.2 ping statistics --- 00:37:48.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.769 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:48.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:48.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:37:48.769 00:37:48.769 --- 10.0.0.1 ping statistics --- 00:37:48.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.769 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4170960 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4170960 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 4170960 ']' 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:48.769 10:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.031 [2024-11-06 10:29:52.309775] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:49.031 [2024-11-06 10:29:52.310938] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:37:49.031 [2024-11-06 10:29:52.310993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.031 [2024-11-06 10:29:52.416269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.031 [2024-11-06 10:29:52.468521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.032 [2024-11-06 10:29:52.468578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.032 [2024-11-06 10:29:52.468587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.032 [2024-11-06 10:29:52.468593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.032 [2024-11-06 10:29:52.468600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:49.032 [2024-11-06 10:29:52.469425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.293 [2024-11-06 10:29:52.544884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:49.293 [2024-11-06 10:29:52.545166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
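At this point the zcopy test has repeated the same NIC plumbing seen earlier for the multipath test: both 0x159b (e810) ports are discovered, cvl_0_0 is moved into its own network namespace as the target interface, cvl_0_1 stays in the root namespace as the initiator, and an interrupt-mode nvmf_tgt (pid 4170960) is started inside that namespace and listens on /var/tmp/spdk.sock. Condensed from the nvmf_tcp_init / nvmfappstart trace above, the setup amounts to roughly the following sketch (paths abbreviated; the cvl_0_* names and 10.0.0.x addresses are the values this particular run printed, not fixed constants):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'                         # SPDK_NVMF tag lets iptr strip the rule at nvmftestfini
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
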
00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 [2024-11-06 10:29:53.158268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 [2024-11-06 10:29:53.186500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:37:49.865 10:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.865 malloc0 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:49.865 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:49.866 { 00:37:49.866 "params": { 00:37:49.866 "name": "Nvme$subsystem", 00:37:49.866 "trtype": "$TEST_TRANSPORT", 00:37:49.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.866 "adrfam": "ipv4", 00:37:49.866 "trsvcid": "$NVMF_PORT", 00:37:49.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.866 "hdgst": ${hdgst:-false}, 00:37:49.866 "ddgst": ${ddgst:-false} 00:37:49.866 }, 00:37:49.866 "method": "bdev_nvme_attach_controller" 00:37:49.866 } 00:37:49.866 EOF 00:37:49.866 )") 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:37:49.866 10:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:49.866 "params": { 00:37:49.866 "name": "Nvme1", 00:37:49.866 "trtype": "tcp", 00:37:49.866 "traddr": "10.0.0.2", 00:37:49.866 "adrfam": "ipv4", 00:37:49.866 "trsvcid": "4420", 00:37:49.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:49.866 "hdgst": false, 00:37:49.866 "ddgst": false 00:37:49.866 }, 00:37:49.866 "method": "bdev_nvme_attach_controller" 00:37:49.866 }' 00:37:49.866 [2024-11-06 10:29:53.287757] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
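Distilled from target/zcopy.sh@22-33 above: the target side of the zero-copy test is five RPCs issued through rpc_cmd (the harness wrapper that talks to the target's /var/tmp/spdk.sock), and the initiator side is bdevperf fed the JSON that gen_nvmf_target_json just printed; the /dev/fd/62 in the command line is bash process substitution. A condensed sketch:

  # target side (the interrupt-mode nvmf_tgt inside cvl_0_0_ns_spdk)
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport with zero-copy enabled
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                    # 32 MB RAM-backed bdev, 4 KiB blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # initiator side: 10 s verify workload, queue depth 128, 8 KiB I/O
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The generated JSON is simply the single bdev_nvme_attach_controller call shown in the printf output above, attaching Nvme1 over TCP to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 with digests disabled.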
00:37:49.866 [2024-11-06 10:29:53.287818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171286 ] 00:37:50.126 [2024-11-06 10:29:53.369688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.126 [2024-11-06 10:29:53.411413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.126 Running I/O for 10 seconds... 00:37:52.454 6621.00 IOPS, 51.73 MiB/s [2024-11-06T09:29:56.897Z] 6661.50 IOPS, 52.04 MiB/s [2024-11-06T09:29:57.838Z] 6672.33 IOPS, 52.13 MiB/s [2024-11-06T09:29:58.778Z] 6682.00 IOPS, 52.20 MiB/s [2024-11-06T09:29:59.720Z] 6686.00 IOPS, 52.23 MiB/s [2024-11-06T09:30:00.662Z] 6690.00 IOPS, 52.27 MiB/s [2024-11-06T09:30:01.603Z] 7084.71 IOPS, 55.35 MiB/s [2024-11-06T09:30:02.986Z] 7411.12 IOPS, 57.90 MiB/s [2024-11-06T09:30:03.928Z] 7666.11 IOPS, 59.89 MiB/s [2024-11-06T09:30:03.928Z] 7870.30 IOPS, 61.49 MiB/s 00:38:00.427 Latency(us) 00:38:00.427 [2024-11-06T09:30:03.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:00.427 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:00.427 Verification LBA range: start 0x0 length 0x1000 00:38:00.427 Nvme1n1 : 10.01 7871.53 61.50 0.00 0.00 16207.55 2225.49 26105.17 00:38:00.427 [2024-11-06T09:30:03.928Z] =================================================================================================================== 00:38:00.427 [2024-11-06T09:30:03.928Z] Total : 7871.53 61.50 0.00 0.00 16207.55 2225.49 26105.17 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4173393 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:00.427 { 00:38:00.427 "params": { 00:38:00.427 "name": "Nvme$subsystem", 00:38:00.427 "trtype": "$TEST_TRANSPORT", 00:38:00.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:00.427 "adrfam": "ipv4", 00:38:00.427 "trsvcid": "$NVMF_PORT", 00:38:00.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:00.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:00.427 "hdgst": ${hdgst:-false}, 00:38:00.427 "ddgst": ${ddgst:-false} 00:38:00.427 }, 00:38:00.427 "method": "bdev_nvme_attach_controller" 00:38:00.427 } 00:38:00.427 EOF 00:38:00.427 )") 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:00.427 
[2024-11-06 10:30:03.709835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.709869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:00.427 10:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:00.427 "params": { 00:38:00.427 "name": "Nvme1", 00:38:00.427 "trtype": "tcp", 00:38:00.427 "traddr": "10.0.0.2", 00:38:00.427 "adrfam": "ipv4", 00:38:00.427 "trsvcid": "4420", 00:38:00.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:00.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:00.427 "hdgst": false, 00:38:00.427 "ddgst": false 00:38:00.427 }, 00:38:00.427 "method": "bdev_nvme_attach_controller" 00:38:00.427 }' 00:38:00.427 [2024-11-06 10:30:03.721804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.721814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.733802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.733809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.745802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.745811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.749370] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:38:00.427 [2024-11-06 10:30:03.749418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173393 ] 00:38:00.427 [2024-11-06 10:30:03.757803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.757812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.769801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.769810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.781802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.427 [2024-11-06 10:30:03.781809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.427 [2024-11-06 10:30:03.793801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.793809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.805802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.805810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.817801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.817809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.828495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.428 [2024-11-06 10:30:03.829801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.829809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.841802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.841811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.853802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.853811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.863911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.428 [2024-11-06 10:30:03.865802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.865811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.877806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.877816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.889804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.889817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.901803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:38:00.428 [2024-11-06 10:30:03.901814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.913803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.913813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.428 [2024-11-06 10:30:03.925803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.428 [2024-11-06 10:30:03.925811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.688 [2024-11-06 10:30:03.937811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:03.937829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:03.949805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:03.949816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:03.961804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:03.961816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:03.973808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:03.973822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:03.985803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:03.985814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.030080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.030095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.041942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.041954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 Running I/O for 5 seconds... 
00:38:00.689 [2024-11-06 10:30:04.057709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.057726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.071098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.071115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.084949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.084966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.098182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.098197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.113112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.113127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.126322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.126337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.140805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.140821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.153617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.153633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.166851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.166869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.689 [2024-11-06 10:30:04.181101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.689 [2024-11-06 10:30:04.181115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.194448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.194463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.209152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.209167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.222144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.222159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.237097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.237113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.250268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 
[2024-11-06 10:30:04.250283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.265114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.265130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.278427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.278443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.293018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.293033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.305937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.305953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.318709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.318723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.332844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.332859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.345925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.345940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.358911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.949 [2024-11-06 10:30:04.358926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.949 [2024-11-06 10:30:04.373045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.373060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.950 [2024-11-06 10:30:04.386222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.386238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.950 [2024-11-06 10:30:04.400917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.400932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.950 [2024-11-06 10:30:04.413626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.413641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.950 [2024-11-06 10:30:04.426396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.426411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:00.950 [2024-11-06 10:30:04.440784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:00.950 [2024-11-06 10:30:04.440807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.210 [2024-11-06 10:30:04.453683] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.210 [2024-11-06 10:30:04.453699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.210 [2024-11-06 10:30:04.467032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.210 [2024-11-06 10:30:04.467047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.210 [2024-11-06 10:30:04.481157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.210 [2024-11-06 10:30:04.481172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.210 [2024-11-06 10:30:04.494367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.210 [2024-11-06 10:30:04.494382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.508921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.508937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.522248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.522263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.537051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.537067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.549600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.549615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.562032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.562047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.574818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.574833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.588911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.588926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.601565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.601580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.614095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.614109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.628591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.628606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.641482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.641497] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.654349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.654363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.669067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.669082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.682024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.682039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.694917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.694936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.211 [2024-11-06 10:30:04.709128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.211 [2024-11-06 10:30:04.709144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.722111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.722126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.737087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.737103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.750382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.750397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.765016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.765031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.777960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.777976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.790860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.790880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.804874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.804890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.818030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.818046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.830932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.830947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.471 [2024-11-06 10:30:04.845105] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.471 [2024-11-06 10:30:04.845120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.857720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.857735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.870329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.870344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.885124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.885140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.898072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.898088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.910782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.910797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.924810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.924825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.937820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.937835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.950871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.950890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.472 [2024-11-06 10:30:04.964652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.472 [2024-11-06 10:30:04.964669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.732 [2024-11-06 10:30:04.977748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.732 [2024-11-06 10:30:04.977765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.732 [2024-11-06 10:30:04.990876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.732 [2024-11-06 10:30:04.990891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.732 [2024-11-06 10:30:05.005611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.732 [2024-11-06 10:30:05.005627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.018350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.018365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.033076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.033091] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.045690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.045705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 19009.00 IOPS, 148.51 MiB/s [2024-11-06T09:30:05.234Z] [2024-11-06 10:30:05.058607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.058622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.073119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.073134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.086410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.086426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.101081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.101097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.114181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.114196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.128963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.128978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.142180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.142194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.156924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.156940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.169391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.169407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.182817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.182832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.197067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.197082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.210476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.210491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.733 [2024-11-06 10:30:05.225151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.733 [2024-11-06 10:30:05.225166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 
10:30:05.237954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.237970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.250800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.250815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.264804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.264820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.277630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.277645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.290340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.290355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.304936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.304951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.318013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.318028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.330501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.330516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.343478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.343493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.357297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.357312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.370351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.370366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.384761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.384776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.397870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.397886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.410328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.410343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.423055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.423071] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.437307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.437323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.450638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.450653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.465237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.465252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.994 [2024-11-06 10:30:05.478505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.994 [2024-11-06 10:30:05.478519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:01.995 [2024-11-06 10:30:05.493175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:01.995 [2024-11-06 10:30:05.493190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.506312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.506327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.520954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.520970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.533716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.533731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.546611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.546625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.560669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.560684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.573998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.574013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.586872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.586887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.601270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.601285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.614171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.614186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.628526] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.628541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.641721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.641736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.654422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.654436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.669261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.669276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.682256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.682271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.696916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.696931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.709820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.709835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.723116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.723131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.737660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.737675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.256 [2024-11-06 10:30:05.750851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.256 [2024-11-06 10:30:05.750871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.765197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.765213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.778582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.778597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.793186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.793201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.806242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.806257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.820501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.820515] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.833400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.833415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.846807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.846821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.860995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.517 [2024-11-06 10:30:05.861010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.517 [2024-11-06 10:30:05.873972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.873994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.887286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.887301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.901045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.901060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.914315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.914329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.929264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.929279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.942374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.942389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.956492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.956507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.969460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.969480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.982328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.982343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:05.997105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:05.997121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.518 [2024-11-06 10:30:06.010121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.518 [2024-11-06 10:30:06.010135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.024636] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.024651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.037997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.038012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 19044.00 IOPS, 148.78 MiB/s [2024-11-06T09:30:06.280Z] [2024-11-06 10:30:06.050642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.050659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.065145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.065160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.078178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.078192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.090963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.090978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.102922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.102936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.116954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.116970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.130301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.130316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.144666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.144681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.157881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.157897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.171201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.171215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.185379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.185394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.198744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.198759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.212801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:02.779 [2024-11-06 10:30:06.212816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.225964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.225983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.238806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.238821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.253175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.253191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:02.779 [2024-11-06 10:30:06.266203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:02.779 [2024-11-06 10:30:06.266217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.280540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.280555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.293705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.293720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.307147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.307162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.320999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.321014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.333956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.333971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.346878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.346893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.360969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.360984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.374114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.374129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.389498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.389514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.402333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.402348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.416941] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.416956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.429721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.429737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.443261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.443276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.457171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.457186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.470135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.470151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.484894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.484914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.497677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.497693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.510563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.510578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.525124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.525140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.041 [2024-11-06 10:30:06.537885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.041 [2024-11-06 10:30:06.537901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.551237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.551253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.565413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.565428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.578133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.578147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.593070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.593085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.606469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.606484] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.621369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.621385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.634230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.634245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.648800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.648816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.661695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.661710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.674705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.674720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.690001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.690017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.702790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.702806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.717374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.717389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.730810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.730826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.744884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.744903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.757755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.757770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.771233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.771248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.785079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.785095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.302 [2024-11-06 10:30:06.798307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.302 [2024-11-06 10:30:06.798322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.812891] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.564 [2024-11-06 10:30:06.812907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.826184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.564 [2024-11-06 10:30:06.826199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.840963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.564 [2024-11-06 10:30:06.840978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.854004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.564 [2024-11-06 10:30:06.854019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.866741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.564 [2024-11-06 10:30:06.866756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.564 [2024-11-06 10:30:06.881244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.881260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.894440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.894455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.909308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.909323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.922509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.922525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.936955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.936971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.950208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.950222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.964982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.964997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.978050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.978065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:06.991069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:06.991084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:07.005084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:07.005100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:07.018218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:07.018233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:07.032776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:07.032791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 [2024-11-06 10:30:07.045777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:07.045793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.565 19044.33 IOPS, 148.78 MiB/s [2024-11-06T09:30:07.066Z] [2024-11-06 10:30:07.058484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.565 [2024-11-06 10:30:07.058499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.073141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.073157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.086211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.086227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.100875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.100890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.114072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.114087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.126619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.126633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.141085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.141101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.154248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.154263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.169353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.169368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.182442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.182456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.197308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.197324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 
10:30:07.209873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.209888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.222616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.222630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.237492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.237508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.250532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.250548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.265463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.265478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.278538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.278553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.292722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.292738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.305820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.305835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:03.826 [2024-11-06 10:30:07.318642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:03.826 [2024-11-06 10:30:07.318657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.332953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.332969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.346151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.346165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.360851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.360871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.373782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.373797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.386783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.386799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.087 [2024-11-06 10:30:07.401405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.087 [2024-11-06 10:30:07.401420] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.414520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.414535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.429346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.429361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.442486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.442500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.457329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.457345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.470473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.470488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.485221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.485239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.498484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.498499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.512950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.512970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.526532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.526546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.540910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.540925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.553803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.553818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.566573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.566588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.088 [2024-11-06 10:30:07.581016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.088 [2024-11-06 10:30:07.581030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.594045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.594060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.607024] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.607039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.621323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.621338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.634588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.634602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.648699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.648714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.661811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.661826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.675114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.675128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.689300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.689315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.702265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.702280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.717065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.717080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.730358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.349 [2024-11-06 10:30:07.730373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.349 [2024-11-06 10:30:07.745186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.745201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.758202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.758217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.772803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.772822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.785846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.785864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.798385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.798400] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.812989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.813004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.826127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.826142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.350 [2024-11-06 10:30:07.840838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.350 [2024-11-06 10:30:07.840853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.853815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.853831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.867087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.867102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.881189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.881204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.894208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.894222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.909068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.909083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.922021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.922036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.934892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.934907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.949407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.949423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.962193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.962207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.976917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.976932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:07.989810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:07.989824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:08.002839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:08.002854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:08.017269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.610 [2024-11-06 10:30:08.017284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.610 [2024-11-06 10:30:08.030456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.030475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.611 [2024-11-06 10:30:08.045328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.045343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.611 19030.50 IOPS, 148.68 MiB/s [2024-11-06T09:30:08.112Z] [2024-11-06 10:30:08.058142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.058156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.611 [2024-11-06 10:30:08.072756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.072771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.611 [2024-11-06 10:30:08.085857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.085875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.611 [2024-11-06 10:30:08.098745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.611 [2024-11-06 10:30:08.098760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.112890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.112905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.125817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.125832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.138865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.138880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.152959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.152974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.166150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.166165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.180834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.180849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.193921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
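Note on the output above: the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs are the expected failure path for this phase of zcopy.sh — the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached (presumably to nqn.2016-06.io.spdk:cnode1, the subsystem this script uses), and the interleaved IOPS/MiB/s lines show the I/O job continuing underneath. A minimal sketch of triggering the same error by hand against a running SPDK nvmf target; the bdev name, size and serial number here are illustrative, not taken from this run:

  # set up a TCP transport, a subsystem, and attach a namespace as NSID 1
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # a second add with the same NSID is rejected, producing the
  # "Requested NSID 1 already in use" error seen throughout this log
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1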
00:38:04.872 [2024-11-06 10:30:08.193935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.206542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.206556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.221048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.221063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.233964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.233980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.246627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.246643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.260910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.260926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.273821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.273837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.286931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.286947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.301322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.301337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.314484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.314500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.328852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.328873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.342060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.342075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.354952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.354967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:04.872 [2024-11-06 10:30:08.368924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:04.872 [2024-11-06 10:30:08.368940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.381950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.381966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.395141] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.395156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.408768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.408783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.421712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.421727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.434750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.434766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.449405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.449421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.462536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.462551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.477101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.477117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.489967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.489983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.502718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.502733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.516823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.516838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.529791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.529807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.543297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.543312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.557463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.557478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.570154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.570170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.585109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.585124] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.598287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.598302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.612916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.612932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.134 [2024-11-06 10:30:08.625660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.134 [2024-11-06 10:30:08.625676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.638961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.638977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.653095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.653110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.666171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.666186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.681396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.681412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.694791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.694806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.709134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.709150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.722440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.722455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.737017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.737032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.750135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.750150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.764963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.764978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.778063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.778080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.790910] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.790925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.805011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.805027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.818109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.818124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.832757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.832773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.845662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.845678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.859071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.859087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.396 [2024-11-06 10:30:08.873439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.396 [2024-11-06 10:30:08.873455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.397 [2024-11-06 10:30:08.886597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.397 [2024-11-06 10:30:08.886613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.901379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.901395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.914469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.914484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.929289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.929304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.942343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.942358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.956731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.956746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.969598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.969614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.982296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.982310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:08.996760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:08.996776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.010263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.010279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.024746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.024762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.037806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.037821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.051006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.051022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 19036.00 IOPS, 148.72 MiB/s [2024-11-06T09:30:09.159Z] [2024-11-06 10:30:09.062362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.062378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 00:38:05.658 Latency(us) 00:38:05.658 [2024-11-06T09:30:09.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.658 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:05.658 Nvme1n1 : 5.01 19038.73 148.74 0.00 0.00 6715.75 2553.17 11468.80 00:38:05.658 [2024-11-06T09:30:09.159Z] =================================================================================================================== 00:38:05.658 [2024-11-06T09:30:09.159Z] Total : 19038.73 148.74 0.00 0.00 6715.75 2553.17 11468.80 00:38:05.658 [2024-11-06 10:30:09.073806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.073820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.085812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.085828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.097806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.097819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.109808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.109822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.121805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.121814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.133802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 
10:30:09.133812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.145802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.145810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.658 [2024-11-06 10:30:09.157807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.658 [2024-11-06 10:30:09.157818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.919 [2024-11-06 10:30:09.169803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.919 [2024-11-06 10:30:09.169813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.919 [2024-11-06 10:30:09.181802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:05.919 [2024-11-06 10:30:09.181811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:05.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4173393) - No such process 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4173393 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:05.919 delay0 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.919 10:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:05.919 [2024-11-06 10:30:09.370043] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:12.688 Initializing NVMe Controllers 00:38:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:38:12.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:12.688 Initialization complete. Launching workers. 00:38:12.688 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4096 00:38:12.688 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4367, failed to submit 49 00:38:12.688 success 4226, unsuccessful 141, failed 0 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:12.688 rmmod nvme_tcp 00:38:12.688 rmmod nvme_fabrics 00:38:12.688 rmmod nvme_keyring 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4170960 ']' 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4170960 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 4170960 ']' 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 4170960 00:38:12.688 10:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4170960 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4170960' 00:38:12.688 killing process with pid 4170960 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 4170960 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 4170960 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:12.688 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.971 10:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:14.882 00:38:14.882 real 0m34.577s 00:38:14.882 user 0m43.541s 00:38:14.882 sys 0m12.414s 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.882 ************************************ 00:38:14.882 END TEST nvmf_zcopy 00:38:14.882 ************************************ 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:14.882 ************************************ 00:38:14.882 START TEST nvmf_nmic 00:38:14.882 ************************************ 00:38:14.882 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:15.143 * Looking for test storage... 
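Before nmic.sh proceeds, note the teardown nvmftestfini just ran above at the end of the zcopy test: the host-side kernel NVMe-oF modules pulled in by the connect/abort steps are unloaded (nvme_tcp, nvme_fabrics, nvme_keyring), the nvmf target process (pid 4170960 in this run) is killed, the SPDK_NVMF iptables rules are dropped, and the test namespace/address on the cvl ports is cleaned up. Condensed into plain shell, that cleanup looks roughly like the sketch below; it is an approximation of what the traced common.sh helpers do, not a verbatim copy, and the pid variable and the netns-delete equivalent of remove_spdk_ns are assumptions:

  # unload the kernel NVMe-oF initiator stack (dependencies such as nvme_fabrics
  # and nvme_keyring are removed along the way, as the rmmod lines above show)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf target application; $target_pid is illustrative (4170960 here)
  kill "$target_pid"
  while kill -0 "$target_pid" 2>/dev/null; do sleep 1; done
  # restore iptables, dropping only the SPDK_NVMF rules the setup added
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # assumed equivalent of remove_spdk_ns on this rig, then flush the peer port's test IP
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1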
00:38:15.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.143 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.144 --rc genhtml_branch_coverage=1 00:38:15.144 --rc genhtml_function_coverage=1 00:38:15.144 --rc genhtml_legend=1 00:38:15.144 --rc geninfo_all_blocks=1 00:38:15.144 --rc geninfo_unexecuted_blocks=1 00:38:15.144 00:38:15.144 ' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.144 --rc genhtml_branch_coverage=1 00:38:15.144 --rc genhtml_function_coverage=1 00:38:15.144 --rc genhtml_legend=1 00:38:15.144 --rc geninfo_all_blocks=1 00:38:15.144 --rc geninfo_unexecuted_blocks=1 00:38:15.144 00:38:15.144 ' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.144 --rc genhtml_branch_coverage=1 00:38:15.144 --rc genhtml_function_coverage=1 00:38:15.144 --rc genhtml_legend=1 00:38:15.144 --rc geninfo_all_blocks=1 00:38:15.144 --rc geninfo_unexecuted_blocks=1 00:38:15.144 00:38:15.144 ' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:15.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.144 --rc genhtml_branch_coverage=1 00:38:15.144 --rc genhtml_function_coverage=1 00:38:15.144 --rc genhtml_legend=1 00:38:15.144 --rc geninfo_all_blocks=1 00:38:15.144 --rc geninfo_unexecuted_blocks=1 00:38:15.144 00:38:15.144 ' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.144 10:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.144 10:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.283 10:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:23.283 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.283 10:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:23.283 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:23.283 Found net devices under 0000:31:00.0: cvl_0_0 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.283 
10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:23.283 Found net devices under 0000:31:00.1: cvl_0_1 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.283 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.284 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
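At this point nvmf_tcp_init has split the two e810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the host namespace as 10.0.0.1 (the initiator side), so the NVMe/TCP traffic really crosses the physical link. A condensed, hand-run equivalent of this plumbing (including the link-up, firewall and ping checks that follow in the next log lines) would look roughly like the sketch below; the interface names, addresses and port are taken from the log itself, and the authoritative version is nvmf_tcp_init in test/nvmf/common.sh.

    # sketch only -- the harness drives this via nvmf_tcp_init in test/nvmf/common.sh
    ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1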
00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:38:23.545 00:38:23.545 --- 10.0.0.2 ping statistics --- 00:38:23.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.545 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:23.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:38:23.545 00:38:23.545 --- 10.0.0.1 ping statistics --- 00:38:23.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.545 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:23.545 10:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4180762 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 4180762 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 4180762 ']' 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:23.545 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:23.806 [2024-11-06 10:30:27.087155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:23.806 [2024-11-06 10:30:27.088301] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:23.806 [2024-11-06 10:30:27.088355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.806 [2024-11-06 10:30:27.182539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.806 [2024-11-06 10:30:27.225014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.806 [2024-11-06 10:30:27.225049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.806 [2024-11-06 10:30:27.225057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.806 [2024-11-06 10:30:27.225064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.806 [2024-11-06 10:30:27.225070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.806 [2024-11-06 10:30:27.226570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.806 [2024-11-06 10:30:27.226664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.806 [2024-11-06 10:30:27.226821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.806 [2024-11-06 10:30:27.226822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:23.806 [2024-11-06 10:30:27.282220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:23.806 [2024-11-06 10:30:27.282363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:23.806 [2024-11-06 10:30:27.282764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
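The target itself is launched inside that namespace with a four-core mask, the full trace mask and --interrupt-mode, and the harness blocks on waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock answers. A minimal stand-alone sketch of the same start-and-wait step is shown below; the nvmf_tgt path and flags come from the log line above, while polling with rpc.py spdk_get_version is an assumption used here in place of the harness's waitforlisten helper.

    # sketch: start nvmf_tgt in interrupt mode inside the target namespace and wait
    # for its RPC socket (spdk_get_version polling is an illustrative stand-in for
    # the harness's waitforlisten helper)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"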
00:38:23.806 [2024-11-06 10:30:27.283395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:23.806 [2024-11-06 10:30:27.283427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 [2024-11-06 10:30:27.951312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 Malloc0 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
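With the target up, the nmic test builds its fabric entirely over JSON-RPC: a TCP transport (with the -o -u 8192 options the test passes), a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. Written as direct scripts/rpc.py calls rather than through the test's rpc_cmd wrapper, the sequence would look roughly like the sketch below, with every parameter taken from the log lines above. The case1 check that follows then has a second subsystem (cnode2) try to claim the same Malloc0 bdev, and the expected "Invalid parameters" JSON-RPC error confirms that a bdev cannot be shared across subsystems.

    # sketch of the setup RPCs issued above, expressed as direct rpc.py calls
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420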
00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 [2024-11-06 10:30:28.027464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:24.749 test case1: single bdev can't be used in multiple subsystems 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 [2024-11-06 10:30:28.063201] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:24.749 [2024-11-06 10:30:28.063221] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:24.749 [2024-11-06 10:30:28.063228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.749 request: 00:38:24.749 { 00:38:24.749 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:24.749 "namespace": { 00:38:24.749 "bdev_name": "Malloc0", 00:38:24.749 "no_auto_visible": false 00:38:24.749 }, 00:38:24.749 "method": "nvmf_subsystem_add_ns", 00:38:24.749 "req_id": 1 00:38:24.749 } 00:38:24.749 Got JSON-RPC error response 00:38:24.749 response: 00:38:24.749 { 00:38:24.749 "code": -32602, 00:38:24.749 "message": "Invalid parameters" 00:38:24.749 } 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:24.749 10:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:24.749 Adding namespace failed - expected result. 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:24.749 test case2: host connect to nvmf target in multiple paths 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:24.749 [2024-11-06 10:30:28.075314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.749 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:25.010 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:25.581 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:25.581 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:38:25.581 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:25.581 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:38:25.581 10:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:38:27.493 10:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:27.493 [global] 00:38:27.493 thread=1 00:38:27.493 invalidate=1 
00:38:27.493 rw=write 00:38:27.493 time_based=1 00:38:27.493 runtime=1 00:38:27.493 ioengine=libaio 00:38:27.493 direct=1 00:38:27.493 bs=4096 00:38:27.493 iodepth=1 00:38:27.493 norandommap=0 00:38:27.493 numjobs=1 00:38:27.493 00:38:27.493 verify_dump=1 00:38:27.493 verify_backlog=512 00:38:27.493 verify_state_save=0 00:38:27.493 do_verify=1 00:38:27.493 verify=crc32c-intel 00:38:27.493 [job0] 00:38:27.493 filename=/dev/nvme0n1 00:38:27.493 Could not set queue depth (nvme0n1) 00:38:28.061 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:28.061 fio-3.35 00:38:28.061 Starting 1 thread 00:38:29.003 00:38:29.003 job0: (groupid=0, jobs=1): err= 0: pid=4181745: Wed Nov 6 10:30:32 2024 00:38:29.003 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:29.003 slat (nsec): min=8893, max=63804, avg=28338.19, stdev=2916.32 00:38:29.003 clat (usec): min=740, max=1287, avg=973.28, stdev=68.63 00:38:29.003 lat (usec): min=768, max=1315, avg=1001.62, stdev=68.65 00:38:29.003 clat percentiles (usec): 00:38:29.003 | 1.00th=[ 799], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 930], 00:38:29.003 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:38:29.003 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:38:29.003 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1287], 00:38:29.003 | 99.99th=[ 1287] 00:38:29.003 write: IOPS=740, BW=2961KiB/s (3032kB/s)(2964KiB/1001msec); 0 zone resets 00:38:29.003 slat (usec): min=9, max=29378, avg=71.40, stdev=1078.14 00:38:29.003 clat (usec): min=233, max=980, avg=572.62, stdev=101.10 00:38:29.003 lat (usec): min=269, max=30088, avg=644.02, stdev=1088.25 00:38:29.003 clat percentiles (usec): 00:38:29.003 | 1.00th=[ 343], 5.00th=[ 404], 10.00th=[ 433], 20.00th=[ 490], 00:38:29.003 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:38:29.003 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 734], 00:38:29.003 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 979], 99.95th=[ 979], 00:38:29.003 | 99.99th=[ 979] 00:38:29.003 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:29.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:29.003 lat (usec) : 250=0.08%, 500=13.81%, 750=43.58%, 1000=30.81% 00:38:29.003 lat (msec) : 2=11.73% 00:38:29.003 cpu : usr=1.60%, sys=6.10%, ctx=1256, majf=0, minf=1 00:38:29.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.003 issued rwts: total=512,741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:29.003 00:38:29.003 Run status group 0 (all jobs): 00:38:29.003 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:38:29.003 WRITE: bw=2961KiB/s (3032kB/s), 2961KiB/s-2961KiB/s (3032kB/s-3032kB/s), io=2964KiB (3035kB), run=1001-1001msec 00:38:29.003 00:38:29.003 Disk stats (read/write): 00:38:29.003 nvme0n1: ios=537/575, merge=0/0, ticks=1474/250, in_queue=1724, util=98.70% 00:38:29.003 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:29.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:29.264 10:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:29.264 rmmod nvme_tcp 00:38:29.264 rmmod nvme_fabrics 00:38:29.264 rmmod nvme_keyring 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4180762 ']' 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4180762 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 4180762 ']' 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 4180762 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4180762 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 4180762' 00:38:29.264 killing process with pid 4180762 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 4180762 00:38:29.264 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 4180762 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.525 10:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.068 10:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:32.068 00:38:32.068 real 0m16.650s 00:38:32.068 user 0m35.714s 00:38:32.068 sys 0m8.194s 00:38:32.068 10:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:32.068 10:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:32.068 ************************************ 00:38:32.068 END TEST nvmf_nmic 00:38:32.068 ************************************ 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:32.068 ************************************ 00:38:32.068 START TEST nvmf_fio_target 00:38:32.068 ************************************ 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:32.068 * Looking for test storage... 
00:38:32.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.068 --rc genhtml_branch_coverage=1 00:38:32.068 --rc genhtml_function_coverage=1 00:38:32.068 --rc genhtml_legend=1 00:38:32.068 --rc geninfo_all_blocks=1 00:38:32.068 --rc geninfo_unexecuted_blocks=1 00:38:32.068 00:38:32.068 ' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.068 --rc genhtml_branch_coverage=1 00:38:32.068 --rc genhtml_function_coverage=1 00:38:32.068 --rc genhtml_legend=1 00:38:32.068 --rc geninfo_all_blocks=1 00:38:32.068 --rc geninfo_unexecuted_blocks=1 00:38:32.068 00:38:32.068 ' 00:38:32.068 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.068 --rc genhtml_branch_coverage=1 00:38:32.068 --rc genhtml_function_coverage=1 00:38:32.068 --rc genhtml_legend=1 00:38:32.069 --rc geninfo_all_blocks=1 00:38:32.069 --rc geninfo_unexecuted_blocks=1 00:38:32.069 00:38:32.069 ' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:32.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.069 --rc genhtml_branch_coverage=1 00:38:32.069 --rc genhtml_function_coverage=1 00:38:32.069 --rc genhtml_legend=1 00:38:32.069 --rc geninfo_all_blocks=1 00:38:32.069 --rc geninfo_unexecuted_blocks=1 00:38:32.069 
00:38:32.069 ' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:32.069 10:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:40.208 10:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:40.208 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:40.209 10:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:40.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:40.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:40.209 Found net 
devices under 0000:31:00.0: cvl_0_0 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:40.209 Found net devices under 0000:31:00.1: cvl_0_1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:40.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:40.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:38:40.209 00:38:40.209 --- 10.0.0.2 ping statistics --- 00:38:40.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.209 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:38:40.209 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:40.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:40.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:38:40.209 00:38:40.209 --- 10.0.0.1 ping statistics --- 00:38:40.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.209 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:38:40.469 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4186760 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4186760 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 4186760 ']' 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:40.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
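The trace above shows nvmf_tcp_init from test/nvmf/common.sh wiring the two e810 ports into a point-to-point test network: the target-side port is moved into a dedicated network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420, and connectivity is checked with ping before the target is launched. A condensed sketch of that setup follows; interface names, addresses and the namespace name are taken from the log, and this is a simplified recap rather than the literal common.sh code.

# minimal sketch of the netns-based NVMe/TCP test topology (names/addresses as seen in the log)
NS=cvl_0_0_ns_spdk        # namespace that hosts the SPDK target
TGT_IF=cvl_0_0            # target-side port (0000:31:00.0)
INI_IF=cvl_0_1            # initiator-side port (0000:31:00.1)
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                              # move the target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                          # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"      # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check both directions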
00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:40.470 10:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:40.470 [2024-11-06 10:30:43.824482] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:40.470 [2024-11-06 10:30:43.825954] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:40.470 [2024-11-06 10:30:43.826016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:40.470 [2024-11-06 10:30:43.916329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:40.470 [2024-11-06 10:30:43.957688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:40.470 [2024-11-06 10:30:43.957725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:40.470 [2024-11-06 10:30:43.957734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:40.470 [2024-11-06 10:30:43.957740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:40.470 [2024-11-06 10:30:43.957746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:40.470 [2024-11-06 10:30:43.959342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:40.470 [2024-11-06 10:30:43.959458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:40.470 [2024-11-06 10:30:43.959616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.470 [2024-11-06 10:30:43.959616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:40.730 [2024-11-06 10:30:44.015324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:40.730 [2024-11-06 10:30:44.015468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:40.730 [2024-11-06 10:30:44.016465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:40.730 [2024-11-06 10:30:44.017290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:40.730 [2024-11-06 10:30:44.017348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
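With nvmf_tgt running in interrupt mode inside the namespace, the RPC calls that follow build the configuration exercised by the fio runs: a TCP transport, two plain malloc bdevs, a raid0 and a concat volume built from five more, one subsystem with four namespaces, a listener on 10.0.0.2:4420, and finally an nvme connect from the host side. A condensed sketch of that sequence is shown below; the rpc.py path is shortened, and the loop is a condensation of the per-bdev calls visible in the log.

# condensed sketch of the target configuration driven by target/fio.sh (paths shortened)
RPC="scripts/rpc.py"                                   # full /var/jenkins/.../spdk/scripts/rpc.py in the job
$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512                         # Malloc0 } plain namespaces
$RPC bdev_malloc_create 64 512                         # Malloc1 }
$RPC bdev_malloc_create 64 512                         # Malloc2 } striped into raid0
$RPC bdev_malloc_create 64 512                         # Malloc3 }
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_malloc_create 64 512                         # Malloc4 } concatenated into concat0
$RPC bdev_malloc_create 64 512                         # Malloc5 }
$RPC bdev_malloc_create 64 512                         # Malloc6 }
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
# waitforserial then polls 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME'
# until all four namespaces (nvme0n1..nvme0n4) are visible before starting fio.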
00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.301 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:41.561 [2024-11-06 10:30:44.812115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.561 10:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:41.821 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:41.821 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:41.822 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:41.822 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:42.082 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:42.082 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:42.342 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:42.342 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:42.342 10:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:42.602 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:42.602 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:42.863 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:42.863 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:42.863 10:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:42.863 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:43.123 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:43.384 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:43.384 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:43.384 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:43.384 10:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:43.645 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:43.905 [2024-11-06 10:30:47.184237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.905 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:43.905 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:44.165 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:38:44.425 10:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:38:46.971 10:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:46.971 [global] 00:38:46.971 thread=1 00:38:46.971 invalidate=1 00:38:46.971 rw=write 00:38:46.971 time_based=1 00:38:46.971 runtime=1 00:38:46.971 ioengine=libaio 00:38:46.971 direct=1 00:38:46.971 bs=4096 00:38:46.971 iodepth=1 00:38:46.971 norandommap=0 00:38:46.971 numjobs=1 00:38:46.971 00:38:46.971 verify_dump=1 00:38:46.971 verify_backlog=512 00:38:46.971 verify_state_save=0 00:38:46.971 do_verify=1 00:38:46.971 verify=crc32c-intel 00:38:46.971 [job0] 00:38:46.971 filename=/dev/nvme0n1 00:38:46.971 [job1] 00:38:46.971 filename=/dev/nvme0n2 00:38:46.971 [job2] 00:38:46.971 filename=/dev/nvme0n3 00:38:46.971 [job3] 00:38:46.971 filename=/dev/nvme0n4 00:38:46.971 Could not set queue depth (nvme0n1) 00:38:46.971 Could not set queue depth (nvme0n2) 00:38:46.971 Could not set queue depth (nvme0n3) 00:38:46.971 Could not set queue depth (nvme0n4) 00:38:46.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:46.971 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:46.971 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:46.971 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:46.971 fio-3.35 00:38:46.971 Starting 4 threads 00:38:48.356 00:38:48.356 job0: (groupid=0, jobs=1): err= 0: pid=4188041: Wed Nov 6 10:30:51 2024 00:38:48.356 read: IOPS=18, BW=73.9KiB/s (75.7kB/s)(76.0KiB/1028msec) 00:38:48.356 slat (nsec): min=10180, max=27064, avg=25919.89, stdev=3814.95 00:38:48.356 clat (usec): min=40886, max=41922, avg=41029.21, stdev=231.57 00:38:48.356 lat (usec): min=40913, max=41949, avg=41055.12, stdev=230.81 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:48.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:48.356 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:48.356 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:38:48.356 | 99.99th=[41681] 00:38:48.356 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:38:48.356 slat (nsec): min=9854, max=53832, avg=29437.27, stdev=10683.69 00:38:48.356 clat (usec): min=184, max=778, avg=446.80, stdev=95.72 00:38:48.356 lat (usec): min=194, max=797, avg=476.23, stdev=101.63 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 367], 00:38:48.356 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 482], 00:38:48.356 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:38:48.356 
| 99.00th=[ 693], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 775], 00:38:48.356 | 99.99th=[ 775] 00:38:48.356 bw ( KiB/s): min= 4096, max= 4096, per=46.83%, avg=4096.00, stdev= 0.00, samples=1 00:38:48.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:48.356 lat (usec) : 250=0.94%, 500=65.91%, 750=29.00%, 1000=0.56% 00:38:48.356 lat (msec) : 50=3.58% 00:38:48.356 cpu : usr=0.88%, sys=1.27%, ctx=533, majf=0, minf=1 00:38:48.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:48.356 job1: (groupid=0, jobs=1): err= 0: pid=4188045: Wed Nov 6 10:30:51 2024 00:38:48.356 read: IOPS=18, BW=74.6KiB/s (76.4kB/s)(76.0KiB/1019msec) 00:38:48.356 slat (nsec): min=26094, max=26873, avg=26681.63, stdev=183.94 00:38:48.356 clat (usec): min=40860, max=41374, avg=40983.51, stdev=107.02 00:38:48.356 lat (usec): min=40887, max=41400, avg=41010.20, stdev=106.89 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:38:48.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:48.356 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:48.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:48.356 | 99.99th=[41157] 00:38:48.356 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:38:48.356 slat (nsec): min=9761, max=60176, avg=27400.04, stdev=11726.32 00:38:48.356 clat (usec): min=213, max=671, avg=433.39, stdev=76.33 00:38:48.356 lat (usec): min=247, max=682, avg=460.79, stdev=82.69 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 351], 00:38:48.356 | 30.00th=[ 379], 40.00th=[ 433], 50.00th=[ 453], 60.00th=[ 469], 00:38:48.356 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:38:48.356 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 668], 00:38:48.356 | 99.99th=[ 668] 00:38:48.356 bw ( KiB/s): min= 4096, max= 4096, per=46.83%, avg=4096.00, stdev= 0.00, samples=1 00:38:48.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:48.356 lat (usec) : 250=0.19%, 500=79.66%, 750=16.57% 00:38:48.356 lat (msec) : 50=3.58% 00:38:48.356 cpu : usr=0.69%, sys=1.28%, ctx=532, majf=0, minf=1 00:38:48.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:48.356 job2: (groupid=0, jobs=1): err= 0: pid=4188056: Wed Nov 6 10:30:51 2024 00:38:48.356 read: IOPS=15, BW=62.6KiB/s (64.1kB/s)(64.0KiB/1023msec) 00:38:48.356 slat (nsec): min=27891, max=28793, avg=28185.56, stdev=227.13 00:38:48.356 clat (usec): min=40973, max=42073, avg=41617.37, stdev=413.40 00:38:48.356 lat (usec): min=41001, max=42101, avg=41645.56, stdev=413.45 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:48.356 | 
30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:38:48.356 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:48.356 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:48.356 | 99.99th=[42206] 00:38:48.356 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:38:48.356 slat (usec): min=9, max=2354, avg=39.13, stdev=109.16 00:38:48.356 clat (usec): min=261, max=1233, avg=647.85, stdev=142.00 00:38:48.356 lat (usec): min=275, max=3587, avg=686.98, stdev=199.92 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[ 326], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 537], 00:38:48.356 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 685], 00:38:48.356 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 873], 00:38:48.356 | 99.00th=[ 1012], 99.50th=[ 1074], 99.90th=[ 1237], 99.95th=[ 1237], 00:38:48.356 | 99.99th=[ 1237] 00:38:48.356 bw ( KiB/s): min= 4096, max= 4096, per=46.83%, avg=4096.00, stdev= 0.00, samples=1 00:38:48.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:48.356 lat (usec) : 500=14.02%, 750=64.20%, 1000=17.23% 00:38:48.356 lat (msec) : 2=1.52%, 50=3.03% 00:38:48.356 cpu : usr=0.78%, sys=2.25%, ctx=532, majf=0, minf=1 00:38:48.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.356 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:48.356 job3: (groupid=0, jobs=1): err= 0: pid=4188060: Wed Nov 6 10:30:51 2024 00:38:48.356 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:48.356 slat (nsec): min=7902, max=57216, avg=26407.28, stdev=2738.15 00:38:48.356 clat (usec): min=625, max=1215, avg=962.13, stdev=76.13 00:38:48.356 lat (usec): min=651, max=1241, avg=988.53, stdev=76.12 00:38:48.356 clat percentiles (usec): 00:38:48.356 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 881], 20.00th=[ 922], 00:38:48.356 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:38:48.356 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:38:48.356 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:38:48.356 | 99.99th=[ 1221] 00:38:48.356 write: IOPS=711, BW=2845KiB/s (2913kB/s)(2848KiB/1001msec); 0 zone resets 00:38:48.357 slat (usec): min=4, max=2873, avg=35.21, stdev=107.09 00:38:48.357 clat (usec): min=245, max=1469, avg=645.94, stdev=130.30 00:38:48.357 lat (usec): min=259, max=3279, avg=681.15, stdev=165.90 00:38:48.357 clat percentiles (usec): 00:38:48.357 | 1.00th=[ 326], 5.00th=[ 429], 10.00th=[ 482], 20.00th=[ 545], 00:38:48.357 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 685], 00:38:48.357 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 840], 00:38:48.357 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 1467], 99.95th=[ 1467], 00:38:48.357 | 99.99th=[ 1467] 00:38:48.357 bw ( KiB/s): min= 4096, max= 4096, per=46.83%, avg=4096.00, stdev= 0.00, samples=1 00:38:48.357 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:48.357 lat (usec) : 250=0.08%, 500=7.35%, 750=40.77%, 1000=40.52% 00:38:48.357 lat (msec) : 2=11.27% 00:38:48.357 cpu : usr=1.70%, sys=3.70%, ctx=1229, majf=0, minf=1 00:38:48.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:38:48.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.357 issued rwts: total=512,712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.357 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:48.357 00:38:48.357 Run status group 0 (all jobs): 00:38:48.357 READ: bw=2202KiB/s (2255kB/s), 62.6KiB/s-2046KiB/s (64.1kB/s-2095kB/s), io=2264KiB (2318kB), run=1001-1028msec 00:38:48.357 WRITE: bw=8747KiB/s (8957kB/s), 1992KiB/s-2845KiB/s (2040kB/s-2913kB/s), io=8992KiB (9208kB), run=1001-1028msec 00:38:48.357 00:38:48.357 Disk stats (read/write): 00:38:48.357 nvme0n1: ios=37/512, merge=0/0, ticks=1439/231, in_queue=1670, util=85.47% 00:38:48.357 nvme0n2: ios=36/512, merge=0/0, ticks=1453/219, in_queue=1672, util=89.30% 00:38:48.357 nvme0n3: ios=61/512, merge=0/0, ticks=568/273, in_queue=841, util=95.40% 00:38:48.357 nvme0n4: ios=542/512, merge=0/0, ticks=576/328, in_queue=904, util=96.20% 00:38:48.357 10:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:48.357 [global] 00:38:48.357 thread=1 00:38:48.357 invalidate=1 00:38:48.357 rw=randwrite 00:38:48.357 time_based=1 00:38:48.357 runtime=1 00:38:48.357 ioengine=libaio 00:38:48.357 direct=1 00:38:48.357 bs=4096 00:38:48.357 iodepth=1 00:38:48.357 norandommap=0 00:38:48.357 numjobs=1 00:38:48.357 00:38:48.357 verify_dump=1 00:38:48.357 verify_backlog=512 00:38:48.357 verify_state_save=0 00:38:48.357 do_verify=1 00:38:48.357 verify=crc32c-intel 00:38:48.357 [job0] 00:38:48.357 filename=/dev/nvme0n1 00:38:48.357 [job1] 00:38:48.357 filename=/dev/nvme0n2 00:38:48.357 [job2] 00:38:48.357 filename=/dev/nvme0n3 00:38:48.357 [job3] 00:38:48.357 filename=/dev/nvme0n4 00:38:48.357 Could not set queue depth (nvme0n1) 00:38:48.357 Could not set queue depth (nvme0n2) 00:38:48.357 Could not set queue depth (nvme0n3) 00:38:48.357 Could not set queue depth (nvme0n4) 00:38:48.618 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:48.618 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:48.618 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:48.618 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:48.618 fio-3.35 00:38:48.618 Starting 4 threads 00:38:50.006 00:38:50.006 job0: (groupid=0, jobs=1): err= 0: pid=4188548: Wed Nov 6 10:30:53 2024 00:38:50.006 read: IOPS=59, BW=238KiB/s (243kB/s)(240KiB/1010msec) 00:38:50.006 slat (nsec): min=14858, max=44613, avg=26994.30, stdev=3634.05 00:38:50.006 clat (usec): min=746, max=42014, avg=12772.29, stdev=18297.77 00:38:50.006 lat (usec): min=773, max=42041, avg=12799.29, stdev=18297.60 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 750], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 963], 00:38:50.006 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:38:50.006 | 70.00th=[ 1172], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:38:50.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:50.006 | 99.99th=[42206] 00:38:50.006 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:38:50.006 slat (nsec): 
min=9738, max=51967, avg=26702.80, stdev=10983.60 00:38:50.006 clat (usec): min=191, max=736, avg=437.05, stdev=80.09 00:38:50.006 lat (usec): min=225, max=769, avg=463.76, stdev=85.60 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 355], 00:38:50.006 | 30.00th=[ 392], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 469], 00:38:50.006 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 523], 95.00th=[ 545], 00:38:50.006 | 99.00th=[ 603], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 734], 00:38:50.006 | 99.99th=[ 734] 00:38:50.006 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:50.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:50.006 lat (usec) : 250=0.52%, 500=72.55%, 750=16.61%, 1000=2.27% 00:38:50.006 lat (msec) : 2=4.90%, 50=3.15% 00:38:50.006 cpu : usr=1.19%, sys=1.09%, ctx=575, majf=0, minf=1 00:38:50.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:50.006 job1: (groupid=0, jobs=1): err= 0: pid=4188549: Wed Nov 6 10:30:53 2024 00:38:50.006 read: IOPS=660, BW=2641KiB/s (2705kB/s)(2644KiB/1001msec) 00:38:50.006 slat (nsec): min=6881, max=60188, avg=25105.27, stdev=6701.41 00:38:50.006 clat (usec): min=318, max=1120, avg=792.04, stdev=125.55 00:38:50.006 lat (usec): min=326, max=1140, avg=817.15, stdev=126.88 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 457], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 685], 00:38:50.006 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 807], 60.00th=[ 840], 00:38:50.006 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 947], 95.00th=[ 971], 00:38:50.006 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1123], 00:38:50.006 | 99.99th=[ 1123] 00:38:50.006 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:38:50.006 slat (nsec): min=3388, max=52987, avg=26947.97, stdev=11150.57 00:38:50.006 clat (usec): min=100, max=972, avg=410.09, stdev=130.04 00:38:50.006 lat (usec): min=104, max=1005, avg=437.04, stdev=130.10 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 123], 5.00th=[ 219], 10.00th=[ 253], 20.00th=[ 310], 00:38:50.006 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 408], 60.00th=[ 433], 00:38:50.006 | 70.00th=[ 478], 80.00th=[ 519], 90.00th=[ 570], 95.00th=[ 619], 00:38:50.006 | 99.00th=[ 775], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 971], 00:38:50.006 | 99.99th=[ 971] 00:38:50.006 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:50.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:50.006 lat (usec) : 250=5.82%, 500=40.24%, 750=27.66%, 1000=25.34% 00:38:50.006 lat (msec) : 2=0.95% 00:38:50.006 cpu : usr=1.70%, sys=5.10%, ctx=1687, majf=0, minf=1 00:38:50.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 issued rwts: total=661,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:50.006 job2: (groupid=0, 
jobs=1): err= 0: pid=4188551: Wed Nov 6 10:30:53 2024 00:38:50.006 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1029msec) 00:38:50.006 slat (nsec): min=25165, max=25947, avg=25566.72, stdev=244.74 00:38:50.006 clat (usec): min=1122, max=42053, avg=39588.49, stdev=9604.68 00:38:50.006 lat (usec): min=1148, max=42079, avg=39614.06, stdev=9604.75 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:38:50.006 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:38:50.006 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:50.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:50.006 | 99.99th=[42206] 00:38:50.006 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:38:50.006 slat (nsec): min=9065, max=70754, avg=25181.38, stdev=10980.73 00:38:50.006 clat (usec): min=124, max=1004, avg=584.88, stdev=164.49 00:38:50.006 lat (usec): min=134, max=1036, avg=610.06, stdev=169.47 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 155], 5.00th=[ 273], 10.00th=[ 371], 20.00th=[ 453], 00:38:50.006 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:38:50.006 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 832], 00:38:50.006 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1004], 00:38:50.006 | 99.99th=[ 1004] 00:38:50.006 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:50.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:50.006 lat (usec) : 250=4.15%, 500=20.75%, 750=58.11%, 1000=13.40% 00:38:50.006 lat (msec) : 2=0.38%, 50=3.21% 00:38:50.006 cpu : usr=0.78%, sys=1.17%, ctx=531, majf=0, minf=2 00:38:50.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.006 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:50.006 job3: (groupid=0, jobs=1): err= 0: pid=4188554: Wed Nov 6 10:30:53 2024 00:38:50.006 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:38:50.006 slat (nsec): min=26140, max=29709, avg=27682.19, stdev=683.14 00:38:50.006 clat (usec): min=468, max=41530, avg=35250.48, stdev=14375.88 00:38:50.006 lat (usec): min=496, max=41558, avg=35278.16, stdev=14375.82 00:38:50.006 clat percentiles (usec): 00:38:50.006 | 1.00th=[ 469], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[40633], 00:38:50.006 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:50.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:50.006 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:38:50.006 | 99.99th=[41681] 00:38:50.006 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:38:50.006 slat (nsec): min=9200, max=65679, avg=31794.98, stdev=9470.24 00:38:50.006 clat (usec): min=118, max=2066, avg=476.61, stdev=157.28 00:38:50.006 lat (usec): min=130, max=2101, avg=508.41, stdev=160.27 00:38:50.007 clat percentiles (usec): 00:38:50.007 | 1.00th=[ 141], 5.00th=[ 235], 10.00th=[ 281], 20.00th=[ 359], 00:38:50.007 | 30.00th=[ 396], 40.00th=[ 437], 50.00th=[ 474], 60.00th=[ 519], 00:38:50.007 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 660], 95.00th=[ 693], 
00:38:50.007 | 99.00th=[ 750], 99.50th=[ 775], 99.90th=[ 2073], 99.95th=[ 2073], 00:38:50.007 | 99.99th=[ 2073] 00:38:50.007 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:38:50.007 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:50.007 lat (usec) : 250=5.82%, 500=47.84%, 750=41.46%, 1000=0.94% 00:38:50.007 lat (msec) : 2=0.38%, 4=0.19%, 50=3.38% 00:38:50.007 cpu : usr=0.90%, sys=2.29%, ctx=535, majf=0, minf=1 00:38:50.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.007 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:50.007 00:38:50.007 Run status group 0 (all jobs): 00:38:50.007 READ: bw=2954KiB/s (3025kB/s), 70.0KiB/s-2641KiB/s (71.7kB/s-2705kB/s), io=3040KiB (3113kB), run=1001-1029msec 00:38:50.007 WRITE: bw=9951KiB/s (10.2MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1029msec 00:38:50.007 00:38:50.007 Disk stats (read/write): 00:38:50.007 nvme0n1: ios=105/512, merge=0/0, ticks=691/221, in_queue=912, util=88.38% 00:38:50.007 nvme0n2: ios=535/928, merge=0/0, ticks=1306/379, in_queue=1685, util=92.46% 00:38:50.007 nvme0n3: ios=70/512, merge=0/0, ticks=623/279, in_queue=902, util=95.99% 00:38:50.007 nvme0n4: ios=53/512, merge=0/0, ticks=1364/190, in_queue=1554, util=97.76% 00:38:50.007 10:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:38:50.007 [global] 00:38:50.007 thread=1 00:38:50.007 invalidate=1 00:38:50.007 rw=write 00:38:50.007 time_based=1 00:38:50.007 runtime=1 00:38:50.007 ioengine=libaio 00:38:50.007 direct=1 00:38:50.007 bs=4096 00:38:50.007 iodepth=128 00:38:50.007 norandommap=0 00:38:50.007 numjobs=1 00:38:50.007 00:38:50.007 verify_dump=1 00:38:50.007 verify_backlog=512 00:38:50.007 verify_state_save=0 00:38:50.007 do_verify=1 00:38:50.007 verify=crc32c-intel 00:38:50.007 [job0] 00:38:50.007 filename=/dev/nvme0n1 00:38:50.007 [job1] 00:38:50.007 filename=/dev/nvme0n2 00:38:50.007 [job2] 00:38:50.007 filename=/dev/nvme0n3 00:38:50.007 [job3] 00:38:50.007 filename=/dev/nvme0n4 00:38:50.007 Could not set queue depth (nvme0n1) 00:38:50.007 Could not set queue depth (nvme0n2) 00:38:50.007 Could not set queue depth (nvme0n3) 00:38:50.007 Could not set queue depth (nvme0n4) 00:38:50.268 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:50.268 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:50.268 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:50.268 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:50.268 fio-3.35 00:38:50.268 Starting 4 threads 00:38:51.654 00:38:51.654 job0: (groupid=0, jobs=1): err= 0: pid=4189068: Wed Nov 6 10:30:54 2024 00:38:51.654 read: IOPS=6315, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1006msec) 00:38:51.654 slat (nsec): min=944, max=11046k, avg=68725.25, stdev=487749.70 00:38:51.654 clat (usec): min=1942, max=38471, avg=9435.16, stdev=4887.45 00:38:51.654 lat (usec): min=3546, 
max=38478, avg=9503.89, stdev=4919.75 00:38:51.654 clat percentiles (usec): 00:38:51.654 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6521], 00:38:51.654 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 9110], 00:38:51.654 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[14222], 95.00th=[21627], 00:38:51.654 | 99.00th=[31065], 99.50th=[33162], 99.90th=[38536], 99.95th=[38536], 00:38:51.654 | 99.99th=[38536] 00:38:51.654 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:38:51.654 slat (nsec): min=1640, max=14403k, avg=80164.13, stdev=665104.07 00:38:51.654 clat (usec): min=3324, max=41830, avg=10098.24, stdev=6150.08 00:38:51.654 lat (usec): min=3339, max=41863, avg=10178.40, stdev=6225.95 00:38:51.654 clat percentiles (usec): 00:38:51.654 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6652], 00:38:51.654 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 8291], 00:38:51.654 | 70.00th=[ 8848], 80.00th=[12518], 90.00th=[20841], 95.00th=[24249], 00:38:51.654 | 99.00th=[29754], 99.50th=[32113], 99.90th=[33817], 99.95th=[38011], 00:38:51.654 | 99.99th=[41681] 00:38:51.654 bw ( KiB/s): min=20480, max=32768, per=28.82%, avg=26624.00, stdev=8688.93, samples=2 00:38:51.654 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:38:51.654 lat (msec) : 2=0.01%, 4=0.33%, 10=77.36%, 20=13.36%, 50=8.94% 00:38:51.654 cpu : usr=4.38%, sys=6.57%, ctx=512, majf=0, minf=1 00:38:51.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:38:51.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:51.654 issued rwts: total=6353,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:51.654 job1: (groupid=0, jobs=1): err= 0: pid=4189069: Wed Nov 6 10:30:54 2024 00:38:51.654 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:38:51.654 slat (nsec): min=975, max=26297k, avg=96587.90, stdev=866022.99 00:38:51.654 clat (usec): min=6075, max=48518, avg=12321.13, stdev=5521.32 00:38:51.654 lat (usec): min=6081, max=48547, avg=12417.72, stdev=5596.26 00:38:51.654 clat percentiles (usec): 00:38:51.654 | 1.00th=[ 6194], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8848], 00:38:51.654 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10683], 00:38:51.654 | 70.00th=[12911], 80.00th=[15401], 90.00th=[20317], 95.00th=[25297], 00:38:51.654 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[43254], 00:38:51.654 | 99.99th=[48497] 00:38:51.654 write: IOPS=5185, BW=20.3MiB/s (21.2MB/s)(20.4MiB/1007msec); 0 zone resets 00:38:51.654 slat (nsec): min=1645, max=14674k, avg=91325.96, stdev=709809.03 00:38:51.654 clat (usec): min=1133, max=98718, avg=12384.77, stdev=12354.58 00:38:51.654 lat (usec): min=1142, max=98726, avg=12476.09, stdev=12431.67 00:38:51.654 clat percentiles (usec): 00:38:51.654 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 7570], 00:38:51.654 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9765], 00:38:51.654 | 70.00th=[10552], 80.00th=[12649], 90.00th=[17695], 95.00th=[23200], 00:38:51.655 | 99.00th=[84411], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:38:51.655 | 99.99th=[99091] 00:38:51.655 bw ( KiB/s): min=16376, max=24584, per=22.17%, avg=20480.00, stdev=5803.93, samples=2 00:38:51.655 iops : min= 4094, max= 6146, avg=5120.00, stdev=1450.98, samples=2 
00:38:51.655 lat (msec) : 2=0.22%, 4=0.12%, 10=60.25%, 20=30.39%, 50=7.56% 00:38:51.655 lat (msec) : 100=1.46% 00:38:51.655 cpu : usr=3.88%, sys=5.37%, ctx=292, majf=0, minf=2 00:38:51.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:38:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:51.655 issued rwts: total=5120,5222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:51.655 job2: (groupid=0, jobs=1): err= 0: pid=4189070: Wed Nov 6 10:30:54 2024 00:38:51.655 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:38:51.655 slat (nsec): min=975, max=29788k, avg=97626.47, stdev=865550.13 00:38:51.655 clat (usec): min=2948, max=76145, avg=13747.14, stdev=10141.38 00:38:51.655 lat (usec): min=2954, max=76172, avg=13844.77, stdev=10227.73 00:38:51.655 clat percentiles (usec): 00:38:51.655 | 1.00th=[ 3818], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 8979], 00:38:51.655 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10814], 60.00th=[11863], 00:38:51.655 | 70.00th=[12780], 80.00th=[14353], 90.00th=[22152], 95.00th=[38536], 00:38:51.655 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:38:51.655 | 99.99th=[76022] 00:38:51.655 write: IOPS=4699, BW=18.4MiB/s (19.2MB/s)(18.4MiB/1005msec); 0 zone resets 00:38:51.655 slat (nsec): min=1701, max=13965k, avg=84117.42, stdev=591172.99 00:38:51.655 clat (usec): min=1254, max=79788, avg=13568.07, stdev=10335.03 00:38:51.655 lat (usec): min=1265, max=79790, avg=13652.19, stdev=10389.97 00:38:51.655 clat percentiles (usec): 00:38:51.655 | 1.00th=[ 2769], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 6718], 00:38:51.655 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[10290], 00:38:51.655 | 70.00th=[13698], 80.00th=[24511], 90.00th=[28443], 95.00th=[32375], 00:38:51.655 | 99.00th=[46924], 99.50th=[60031], 99.90th=[73925], 99.95th=[73925], 00:38:51.655 | 99.99th=[80217] 00:38:51.655 bw ( KiB/s): min=16440, max=20480, per=19.98%, avg=18460.00, stdev=2856.71, samples=2 00:38:51.655 iops : min= 4110, max= 5120, avg=4615.00, stdev=714.18, samples=2 00:38:51.655 lat (msec) : 2=0.20%, 4=2.56%, 10=46.78%, 20=31.97%, 50=16.69% 00:38:51.655 lat (msec) : 100=1.80% 00:38:51.655 cpu : usr=2.59%, sys=6.47%, ctx=363, majf=0, minf=1 00:38:51.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:38:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:51.655 issued rwts: total=4608,4723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:51.655 job3: (groupid=0, jobs=1): err= 0: pid=4189071: Wed Nov 6 10:30:54 2024 00:38:51.655 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:38:51.655 slat (nsec): min=1016, max=21131k, avg=75200.55, stdev=642004.90 00:38:51.655 clat (usec): min=3903, max=38845, avg=10033.31, stdev=4032.85 00:38:51.655 lat (usec): min=3910, max=38873, avg=10108.51, stdev=4080.04 00:38:51.655 clat percentiles (usec): 00:38:51.655 | 1.00th=[ 4752], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7111], 00:38:51.655 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9634], 00:38:51.655 | 70.00th=[10814], 80.00th=[12256], 90.00th=[15664], 95.00th=[18482], 00:38:51.655 | 99.00th=[23200], 
99.50th=[23200], 99.90th=[28967], 99.95th=[28967], 00:38:51.655 | 99.99th=[39060] 00:38:51.655 write: IOPS=6639, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:38:51.655 slat (nsec): min=1740, max=24814k, avg=68426.46, stdev=624750.70 00:38:51.655 clat (usec): min=1233, max=40507, avg=9044.24, stdev=4591.24 00:38:51.655 lat (usec): min=1739, max=40541, avg=9112.67, stdev=4625.68 00:38:51.655 clat percentiles (usec): 00:38:51.655 | 1.00th=[ 3458], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 6325], 00:38:51.655 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:38:51.655 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[13698], 95.00th=[17433], 00:38:51.655 | 99.00th=[27395], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:38:51.655 | 99.99th=[40633] 00:38:51.655 bw ( KiB/s): min=21872, max=31376, per=28.82%, avg=26624.00, stdev=6720.34, samples=2 00:38:51.655 iops : min= 5468, max= 7844, avg=6656.00, stdev=1680.09, samples=2 00:38:51.655 lat (msec) : 2=0.09%, 4=0.64%, 10=67.75%, 20=27.03%, 50=4.49% 00:38:51.655 cpu : usr=5.99%, sys=6.19%, ctx=356, majf=0, minf=1 00:38:51.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:38:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:51.655 issued rwts: total=6656,6659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:51.655 00:38:51.655 Run status group 0 (all jobs): 00:38:51.655 READ: bw=88.2MiB/s (92.5MB/s), 17.9MiB/s-25.9MiB/s (18.8MB/s-27.2MB/s), io=88.8MiB (93.1MB), run=1003-1007msec 00:38:51.655 WRITE: bw=90.2MiB/s (94.6MB/s), 18.4MiB/s-25.9MiB/s (19.2MB/s-27.2MB/s), io=90.9MiB (95.3MB), run=1003-1007msec 00:38:51.655 00:38:51.655 Disk stats (read/write): 00:38:51.655 nvme0n1: ios=5762/6144, merge=0/0, ticks=24189/25530, in_queue=49719, util=84.57% 00:38:51.655 nvme0n2: ios=3987/4096, merge=0/0, ticks=49128/53355, in_queue=102483, util=90.32% 00:38:51.655 nvme0n3: ios=3604/3959, merge=0/0, ticks=41692/49611, in_queue=91303, util=91.98% 00:38:51.655 nvme0n4: ios=5178/5445, merge=0/0, ticks=52488/48616, in_queue=101104, util=93.92% 00:38:51.655 10:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:38:51.655 [global] 00:38:51.655 thread=1 00:38:51.655 invalidate=1 00:38:51.655 rw=randwrite 00:38:51.655 time_based=1 00:38:51.655 runtime=1 00:38:51.655 ioengine=libaio 00:38:51.655 direct=1 00:38:51.655 bs=4096 00:38:51.655 iodepth=128 00:38:51.655 norandommap=0 00:38:51.655 numjobs=1 00:38:51.655 00:38:51.655 verify_dump=1 00:38:51.655 verify_backlog=512 00:38:51.655 verify_state_save=0 00:38:51.655 do_verify=1 00:38:51.655 verify=crc32c-intel 00:38:51.655 [job0] 00:38:51.655 filename=/dev/nvme0n1 00:38:51.655 [job1] 00:38:51.655 filename=/dev/nvme0n2 00:38:51.655 [job2] 00:38:51.655 filename=/dev/nvme0n3 00:38:51.655 [job3] 00:38:51.655 filename=/dev/nvme0n4 00:38:51.655 Could not set queue depth (nvme0n1) 00:38:51.655 Could not set queue depth (nvme0n2) 00:38:51.655 Could not set queue depth (nvme0n3) 00:38:51.655 Could not set queue depth (nvme0n4) 00:38:51.916 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:51.916 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:38:51.916 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:51.916 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:51.916 fio-3.35 00:38:51.916 Starting 4 threads 00:38:53.301 00:38:53.301 job0: (groupid=0, jobs=1): err= 0: pid=4189597: Wed Nov 6 10:30:56 2024 00:38:53.301 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:38:53.301 slat (nsec): min=934, max=24298k, avg=158058.60, stdev=1229087.89 00:38:53.301 clat (usec): min=5521, max=68362, avg=20649.25, stdev=17456.33 00:38:53.301 lat (usec): min=5527, max=68371, avg=20807.31, stdev=17554.54 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 5735], 5.00th=[ 7111], 10.00th=[ 8717], 20.00th=[ 9372], 00:38:53.301 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10945], 60.00th=[13829], 00:38:53.301 | 70.00th=[18220], 80.00th=[36963], 90.00th=[53216], 95.00th=[61604], 00:38:53.301 | 99.00th=[64226], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:38:53.301 | 99.99th=[68682] 00:38:53.301 write: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1003msec); 0 zone resets 00:38:53.301 slat (nsec): min=1584, max=10791k, avg=106610.71, stdev=521991.00 00:38:53.301 clat (usec): min=609, max=43850, avg=13577.81, stdev=6300.83 00:38:53.301 lat (usec): min=613, max=43859, avg=13684.42, stdev=6346.56 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 3425], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 8225], 00:38:53.301 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11731], 60.00th=[13698], 00:38:53.301 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21103], 95.00th=[24511], 00:38:53.301 | 99.00th=[34866], 99.50th=[40633], 99.90th=[42730], 99.95th=[43779], 00:38:53.301 | 99.99th=[43779] 00:38:53.301 bw ( KiB/s): min= 9608, max=20480, per=17.49%, avg=15044.00, stdev=7687.66, samples=2 00:38:53.301 iops : min= 2402, max= 5120, avg=3761.00, stdev=1921.92, samples=2 00:38:53.301 lat (usec) : 750=0.08% 00:38:53.301 lat (msec) : 2=0.24%, 4=0.58%, 10=36.32%, 20=42.53%, 50=13.96% 00:38:53.301 lat (msec) : 100=6.29% 00:38:53.301 cpu : usr=1.80%, sys=3.49%, ctx=408, majf=0, minf=1 00:38:53.301 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:38:53.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:53.301 issued rwts: total=3584,3888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:53.301 job1: (groupid=0, jobs=1): err= 0: pid=4189599: Wed Nov 6 10:30:56 2024 00:38:53.301 read: IOPS=6588, BW=25.7MiB/s (27.0MB/s)(26.9MiB/1044msec) 00:38:53.301 slat (nsec): min=981, max=18691k, avg=70038.44, stdev=595110.75 00:38:53.301 clat (usec): min=2037, max=62990, avg=10029.04, stdev=8906.08 00:38:53.301 lat (usec): min=2046, max=63001, avg=10099.08, stdev=8951.43 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 3818], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 6194], 00:38:53.301 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7635], 00:38:53.301 | 70.00th=[ 8029], 80.00th=[10290], 90.00th=[16450], 95.00th=[26870], 00:38:53.301 | 99.00th=[52167], 99.50th=[55837], 99.90th=[63177], 99.95th=[63177], 00:38:53.301 | 99.99th=[63177] 00:38:53.301 write: IOPS=6865, BW=26.8MiB/s (28.1MB/s)(28.0MiB/1044msec); 0 zone resets 00:38:53.301 slat (nsec): 
min=1604, max=19695k, avg=65248.74, stdev=507485.38 00:38:53.301 clat (usec): min=551, max=60519, avg=8835.81, stdev=6774.91 00:38:53.301 lat (usec): min=555, max=60525, avg=8901.06, stdev=6809.31 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 2606], 5.00th=[ 4015], 10.00th=[ 4293], 20.00th=[ 5538], 00:38:53.301 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7308], 00:38:53.301 | 70.00th=[ 7963], 80.00th=[ 9372], 90.00th=[18482], 95.00th=[19530], 00:38:53.301 | 99.00th=[36963], 99.50th=[51119], 99.90th=[60556], 99.95th=[60556], 00:38:53.301 | 99.99th=[60556] 00:38:53.301 bw ( KiB/s): min=23384, max=33960, per=33.33%, avg=28672.00, stdev=7478.36, samples=2 00:38:53.301 iops : min= 5846, max= 8490, avg=7168.00, stdev=1869.59, samples=2 00:38:53.301 lat (usec) : 750=0.02% 00:38:53.301 lat (msec) : 2=0.28%, 4=3.14%, 10=77.00%, 20=13.63%, 50=5.03% 00:38:53.301 lat (msec) : 100=0.89% 00:38:53.301 cpu : usr=4.31%, sys=6.90%, ctx=404, majf=0, minf=2 00:38:53.301 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:53.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:53.301 issued rwts: total=6878,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:53.301 job2: (groupid=0, jobs=1): err= 0: pid=4189600: Wed Nov 6 10:30:56 2024 00:38:53.301 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:38:53.301 slat (nsec): min=973, max=14718k, avg=82829.10, stdev=587136.57 00:38:53.301 clat (usec): min=3334, max=27869, avg=10814.17, stdev=2745.20 00:38:53.301 lat (usec): min=3338, max=27881, avg=10897.00, stdev=2793.35 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 4621], 5.00th=[ 6063], 10.00th=[ 7504], 20.00th=[ 9372], 00:38:53.301 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10814], 60.00th=[11338], 00:38:53.301 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13173], 95.00th=[14746], 00:38:53.301 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22676], 99.95th=[22938], 00:38:53.301 | 99.99th=[27919] 00:38:53.301 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(23.0MiB/1002msec); 0 zone resets 00:38:53.301 slat (nsec): min=1701, max=16028k, avg=85527.34, stdev=590079.53 00:38:53.301 clat (usec): min=595, max=37529, avg=11103.21, stdev=2993.35 00:38:53.301 lat (usec): min=3697, max=37563, avg=11188.74, stdev=3018.55 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 5669], 5.00th=[ 6783], 10.00th=[ 8848], 20.00th=[ 9503], 00:38:53.301 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:38:53.301 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[14746], 00:38:53.301 | 99.00th=[29754], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:38:53.301 | 99.99th=[37487] 00:38:53.301 bw ( KiB/s): min=21000, max=25064, per=26.77%, avg=23032.00, stdev=2873.68, samples=2 00:38:53.301 iops : min= 5250, max= 6266, avg=5758.00, stdev=718.42, samples=2 00:38:53.301 lat (usec) : 750=0.01% 00:38:53.301 lat (msec) : 4=0.21%, 10=30.04%, 20=67.53%, 50=2.21% 00:38:53.301 cpu : usr=2.80%, sys=7.39%, ctx=398, majf=0, minf=2 00:38:53.301 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:38:53.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:53.301 issued rwts: total=5632,5885,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:53.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:53.301 job3: (groupid=0, jobs=1): err= 0: pid=4189601: Wed Nov 6 10:30:56 2024 00:38:53.301 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:38:53.301 slat (nsec): min=1052, max=35867k, avg=98001.55, stdev=813391.08 00:38:53.301 clat (usec): min=4667, max=48796, avg=12352.81, stdev=4616.74 00:38:53.301 lat (usec): min=4677, max=48816, avg=12450.81, stdev=4668.99 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 5997], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 9503], 00:38:53.301 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11600], 60.00th=[12256], 00:38:53.301 | 70.00th=[13566], 80.00th=[14353], 90.00th=[16581], 95.00th=[18220], 00:38:53.301 | 99.00th=[36963], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:38:53.301 | 99.99th=[49021] 00:38:53.301 write: IOPS=5475, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1007msec); 0 zone resets 00:38:53.301 slat (nsec): min=1596, max=14217k, avg=82691.76, stdev=527265.39 00:38:53.301 clat (usec): min=2411, max=43684, avg=11673.55, stdev=5181.15 00:38:53.301 lat (usec): min=2996, max=43693, avg=11756.25, stdev=5214.13 00:38:53.301 clat percentiles (usec): 00:38:53.301 | 1.00th=[ 3392], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 8029], 00:38:53.301 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11338], 00:38:53.301 | 70.00th=[12649], 80.00th=[15926], 90.00th=[19006], 95.00th=[19792], 00:38:53.301 | 99.00th=[23200], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:38:53.301 | 99.99th=[43779] 00:38:53.302 bw ( KiB/s): min=20480, max=22616, per=25.05%, avg=21548.00, stdev=1510.38, samples=2 00:38:53.302 iops : min= 5120, max= 5654, avg=5387.00, stdev=377.60, samples=2 00:38:53.302 lat (msec) : 4=1.05%, 10=37.37%, 20=57.91%, 50=3.67% 00:38:53.302 cpu : usr=2.39%, sys=7.06%, ctx=381, majf=0, minf=1 00:38:53.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:38:53.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:53.302 issued rwts: total=5120,5514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:53.302 00:38:53.302 Run status group 0 (all jobs): 00:38:53.302 READ: bw=79.4MiB/s (83.2MB/s), 14.0MiB/s-25.7MiB/s (14.6MB/s-27.0MB/s), io=82.9MiB (86.9MB), run=1002-1044msec 00:38:53.302 WRITE: bw=84.0MiB/s (88.1MB/s), 15.1MiB/s-26.8MiB/s (15.9MB/s-28.1MB/s), io=87.7MiB (92.0MB), run=1002-1044msec 00:38:53.302 00:38:53.302 Disk stats (read/write): 00:38:53.302 nvme0n1: ios=3317/3584, merge=0/0, ticks=17505/15604, in_queue=33109, util=83.97% 00:38:53.302 nvme0n2: ios=5651/5818, merge=0/0, ticks=36921/32450, in_queue=69371, util=87.53% 00:38:53.302 nvme0n3: ios=4665/4853, merge=0/0, ticks=24854/25396, in_queue=50250, util=95.03% 00:38:53.302 nvme0n4: ios=4157/4515, merge=0/0, ticks=37085/39020, in_queue=76105, util=94.52% 00:38:53.302 10:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:38:53.302 10:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4189928 00:38:53.302 10:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:38:53.302 10:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:38:53.302 [global] 00:38:53.302 thread=1 00:38:53.302 invalidate=1 00:38:53.302 rw=read 00:38:53.302 time_based=1 00:38:53.302 runtime=10 00:38:53.302 ioengine=libaio 00:38:53.302 direct=1 00:38:53.302 bs=4096 00:38:53.302 iodepth=1 00:38:53.302 norandommap=1 00:38:53.302 numjobs=1 00:38:53.302 00:38:53.302 [job0] 00:38:53.302 filename=/dev/nvme0n1 00:38:53.302 [job1] 00:38:53.302 filename=/dev/nvme0n2 00:38:53.302 [job2] 00:38:53.302 filename=/dev/nvme0n3 00:38:53.302 [job3] 00:38:53.302 filename=/dev/nvme0n4 00:38:53.302 Could not set queue depth (nvme0n1) 00:38:53.302 Could not set queue depth (nvme0n2) 00:38:53.302 Could not set queue depth (nvme0n3) 00:38:53.302 Could not set queue depth (nvme0n4) 00:38:53.561 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:53.561 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:53.561 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:53.561 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:53.561 fio-3.35 00:38:53.561 Starting 4 threads 00:38:56.161 10:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:38:56.423 10:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:38:56.423 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:38:56.423 fio: pid=4190122, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:56.684 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:38:56.684 fio: pid=4190121, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:56.684 10:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:56.684 10:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:38:56.945 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:56.945 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:38:56.945 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4943872, buflen=4096 00:38:56.945 fio: pid=4190119, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:56.945 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:56.945 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:38:56.945 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=712704, buflen=4096 00:38:56.945 fio: pid=4190120, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:56.945 00:38:56.945 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190119: Wed Nov 6 10:31:00 2024 00:38:56.945 read: IOPS=405, BW=1621KiB/s (1660kB/s)(4828KiB/2979msec) 00:38:56.945 slat (usec): min=6, max=24586, avg=46.77, stdev=706.64 00:38:56.945 clat (usec): min=726, max=42903, avg=2396.22, stdev=7548.31 00:38:56.945 lat (usec): min=753, max=42934, avg=2443.01, stdev=7578.16 00:38:56.945 clat percentiles (usec): 00:38:56.945 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 914], 00:38:56.945 | 30.00th=[ 930], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 963], 00:38:56.945 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1106], 00:38:56.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:38:56.945 | 99.99th=[42730] 00:38:56.945 bw ( KiB/s): min= 96, max= 4104, per=100.00%, avg=1912.00, stdev=2041.89, samples=5 00:38:56.945 iops : min= 24, max= 1026, avg=478.00, stdev=510.47, samples=5 00:38:56.945 lat (usec) : 750=0.25%, 1000=82.45% 00:38:56.945 lat (msec) : 2=13.66%, 50=3.56% 00:38:56.945 cpu : usr=0.81%, sys=1.51%, ctx=1209, majf=0, minf=1 00:38:56.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:56.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:56.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:56.945 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190120: Wed Nov 6 10:31:00 2024 00:38:56.945 read: IOPS=55, BW=219KiB/s (225kB/s)(696KiB/3174msec) 00:38:56.945 slat (usec): min=18, max=13268, avg=189.08, stdev=1291.19 00:38:56.945 clat (usec): min=807, max=42082, avg=17917.84, stdev=20207.78 00:38:56.945 lat (usec): min=832, max=42108, avg=18064.27, stdev=20139.67 00:38:56.945 clat percentiles (usec): 00:38:56.945 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 971], 00:38:56.945 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1057], 60.00th=[41157], 00:38:56.945 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:56.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:56.945 | 99.99th=[42206] 00:38:56.945 bw ( KiB/s): min= 96, max= 734, per=10.98%, avg=209.00, stdev=257.69, samples=6 00:38:56.945 iops : min= 24, max= 183, avg=52.17, stdev=64.22, samples=6 00:38:56.945 lat (usec) : 1000=34.86% 00:38:56.945 lat (msec) : 2=23.43%, 50=41.14% 00:38:56.945 cpu : usr=0.00%, sys=0.22%, ctx=178, majf=0, minf=1 00:38:56.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:56.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:56.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:56.945 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190121: Wed Nov 6 10:31:00 2024 00:38:56.945 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(268KiB/2789msec) 00:38:56.945 slat (nsec): min=26467, max=91164, avg=28061.37, stdev=7876.30 
00:38:56.945 clat (usec): min=1139, max=42112, avg=41267.44, stdev=4985.37 00:38:56.945 lat (usec): min=1173, max=42145, avg=41295.51, stdev=4984.59 00:38:56.945 clat percentiles (usec): 00:38:56.945 | 1.00th=[ 1139], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:38:56.945 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:56.945 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:56.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:56.945 | 99.99th=[42206] 00:38:56.945 bw ( KiB/s): min= 96, max= 96, per=5.04%, avg=96.00, stdev= 0.00, samples=5 00:38:56.945 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:38:56.945 lat (msec) : 2=1.47%, 50=97.06% 00:38:56.945 cpu : usr=0.14%, sys=0.00%, ctx=69, majf=0, minf=2 00:38:56.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:56.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:56.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:56.945 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190122: Wed Nov 6 10:31:00 2024 00:38:56.945 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(252KiB/2624msec) 00:38:56.945 slat (nsec): min=24739, max=35130, avg=25304.64, stdev=1270.69 00:38:56.945 clat (usec): min=937, max=42213, avg=41274.62, stdev=5168.20 00:38:56.945 lat (usec): min=973, max=42238, avg=41299.94, stdev=5166.94 00:38:56.945 clat percentiles (usec): 00:38:56.945 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:38:56.945 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:38:56.945 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:56.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:56.945 | 99.99th=[42206] 00:38:56.945 bw ( KiB/s): min= 96, max= 96, per=5.04%, avg=96.00, stdev= 0.00, samples=5 00:38:56.945 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:38:56.945 lat (usec) : 1000=1.56% 00:38:56.945 lat (msec) : 50=96.88% 00:38:56.945 cpu : usr=0.00%, sys=0.11%, ctx=64, majf=0, minf=2 00:38:56.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:56.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:56.945 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:56.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:56.945 00:38:56.945 Run status group 0 (all jobs): 00:38:56.945 READ: bw=1904KiB/s (1950kB/s), 96.0KiB/s-1621KiB/s (98.3kB/s-1660kB/s), io=6044KiB (6189kB), run=2624-3174msec 00:38:56.945 00:38:56.945 Disk stats (read/write): 00:38:56.945 nvme0n1: ios=1204/0, merge=0/0, ticks=2640/0, in_queue=2640, util=93.96% 00:38:56.945 nvme0n2: ios=172/0, merge=0/0, ticks=3038/0, in_queue=3038, util=95.04% 00:38:56.945 nvme0n3: ios=62/0, merge=0/0, ticks=2558/0, in_queue=2558, util=95.99% 00:38:56.945 nvme0n4: ios=62/0, merge=0/0, ticks=2560/0, in_queue=2560, util=96.42% 00:38:57.206 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:57.206 10:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:38:57.467 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:57.467 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:38:57.467 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:57.467 10:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:38:57.728 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:57.728 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4189928 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:57.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:38:57.990 nvmf hotplug test: fio failed as expected 00:38:57.990 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:38:58.251 
10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.251 rmmod nvme_tcp 00:38:58.251 rmmod nvme_fabrics 00:38:58.251 rmmod nvme_keyring 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4186760 ']' 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4186760 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 4186760 ']' 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 4186760 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4186760 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4186760' 00:38:58.251 killing process with pid 4186760 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 4186760 00:38:58.251 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 4186760 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:58.512 10:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.512 10:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.059 00:39:01.059 real 0m28.882s 00:39:01.059 user 2m18.348s 00:39:01.059 sys 0m12.693s 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:01.059 ************************************ 00:39:01.059 END TEST nvmf_fio_target 00:39:01.059 ************************************ 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:01.059 10:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:01.059 ************************************ 00:39:01.059 START TEST nvmf_bdevio 00:39:01.059 ************************************ 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:01.059 * Looking for test storage... 
00:39:01.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.059 --rc genhtml_branch_coverage=1 00:39:01.059 --rc genhtml_function_coverage=1 00:39:01.059 --rc genhtml_legend=1 00:39:01.059 --rc geninfo_all_blocks=1 00:39:01.059 --rc geninfo_unexecuted_blocks=1 00:39:01.059 00:39:01.059 ' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.059 --rc genhtml_branch_coverage=1 00:39:01.059 --rc genhtml_function_coverage=1 00:39:01.059 --rc genhtml_legend=1 00:39:01.059 --rc geninfo_all_blocks=1 00:39:01.059 --rc geninfo_unexecuted_blocks=1 00:39:01.059 00:39:01.059 ' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.059 --rc genhtml_branch_coverage=1 00:39:01.059 --rc genhtml_function_coverage=1 00:39:01.059 --rc genhtml_legend=1 00:39:01.059 --rc geninfo_all_blocks=1 00:39:01.059 --rc geninfo_unexecuted_blocks=1 00:39:01.059 00:39:01.059 ' 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:01.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.059 --rc genhtml_branch_coverage=1 00:39:01.059 --rc genhtml_function_coverage=1 00:39:01.059 --rc genhtml_legend=1 00:39:01.059 --rc geninfo_all_blocks=1 00:39:01.059 --rc geninfo_unexecuted_blocks=1 00:39:01.059 00:39:01.059 ' 00:39:01.059 10:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.059 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.060 10:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.060 10:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:09.199 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:09.199 10:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:09.199 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:09.199 Found net devices under 0000:31:00.0: cvl_0_0 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.199 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:09.200 Found net devices under 0000:31:00.1: cvl_0_1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:09.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:09.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:39:09.200 00:39:09.200 --- 10.0.0.2 ping statistics --- 00:39:09.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.200 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:09.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:09.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:39:09.200 00:39:09.200 --- 10.0.0.1 ping statistics --- 00:39:09.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.200 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:09.200 10:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2282 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2282 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2282 ']' 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:09.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:09.200 10:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:09.200 [2024-11-06 10:31:12.663354] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:09.200 [2024-11-06 10:31:12.664341] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:09.200 [2024-11-06 10:31:12.664381] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.461 [2024-11-06 10:31:12.766265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.461 [2024-11-06 10:31:12.802082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.461 [2024-11-06 10:31:12.802114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.461 [2024-11-06 10:31:12.802122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.461 [2024-11-06 10:31:12.802129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.461 [2024-11-06 10:31:12.802135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:09.461 [2024-11-06 10:31:12.803648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:09.461 [2024-11-06 10:31:12.803796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:09.461 [2024-11-06 10:31:12.803923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.461 [2024-11-06 10:31:12.803924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:09.461 [2024-11-06 10:31:12.858526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
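The block above is the whole target-side bring-up: nvmf_tcp_init moves the first e810 port (cvl_0_0) into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, verifies connectivity in both directions, and nvmfappstart then launches nvmf_tgt inside that namespace in interrupt mode on cores 3-6 (mask 0x78). A condensed sketch of the same sequence, assuming root, the cvl_0_0/cvl_0_1 interfaces found during the PCI discovery step, and the workspace paths used in this job; the rpc_get_methods poll is only a stand-in for the waitforlisten helper, whose body is not shown in the trace:

  # network topology: initiator stays in the root namespace, target goes into cvl_0_0_ns_spdk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in; the comment lets the teardown strip exactly this rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the target in the namespace: interrupt mode, tracepoint group mask 0xFFFF, cores 3-6
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # wait until the RPC socket answers (stand-in for waitforlisten)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done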
00:39:09.461 [2024-11-06 10:31:12.859877] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:09.461 [2024-11-06 10:31:12.860114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:09.461 [2024-11-06 10:31:12.860952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:09.461 [2024-11-06 10:31:12.860995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:10.031 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.032 [2024-11-06 10:31:13.500663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.032 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.292 Malloc0 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.292 10:31:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:10.292 [2024-11-06 10:31:13.584949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:10.292 { 00:39:10.292 "params": { 00:39:10.292 "name": "Nvme$subsystem", 00:39:10.292 "trtype": "$TEST_TRANSPORT", 00:39:10.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.292 "adrfam": "ipv4", 00:39:10.292 "trsvcid": "$NVMF_PORT", 00:39:10.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.292 "hdgst": ${hdgst:-false}, 00:39:10.292 "ddgst": ${ddgst:-false} 00:39:10.292 }, 00:39:10.292 "method": "bdev_nvme_attach_controller" 00:39:10.292 } 00:39:10.292 EOF 00:39:10.292 )") 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:10.292 10:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:10.292 "params": { 00:39:10.292 "name": "Nvme1", 00:39:10.292 "trtype": "tcp", 00:39:10.292 "traddr": "10.0.0.2", 00:39:10.292 "adrfam": "ipv4", 00:39:10.292 "trsvcid": "4420", 00:39:10.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.292 "hdgst": false, 00:39:10.292 "ddgst": false 00:39:10.292 }, 00:39:10.292 "method": "bdev_nvme_attach_controller" 00:39:10.292 }' 00:39:10.292 [2024-11-06 10:31:13.640491] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
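Once the target answers on /var/tmp/spdk.sock, the test provisions it with five RPCs (TCP transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420) and then runs bdevio against it, feeding the initiator-side JSON produced by gen_nvmf_target_json through process substitution (the /dev/fd/62 seen above). A rough standalone equivalent, assuming the same RPC socket and that nvmf/common.sh has been sourced so gen_nvmf_target_json is available; the flags simply mirror the traced rpc_cmd calls:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio attaches to 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host1 using the generated config
  "$spdk/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json)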
00:39:10.292 [2024-11-06 10:31:13.640546] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467 ] 00:39:10.292 [2024-11-06 10:31:13.718753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:10.292 [2024-11-06 10:31:13.756955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.292 [2024-11-06 10:31:13.757089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:10.292 [2024-11-06 10:31:13.757092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.552 I/O targets: 00:39:10.552 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:10.552 00:39:10.552 00:39:10.552 CUnit - A unit testing framework for C - Version 2.1-3 00:39:10.552 http://cunit.sourceforge.net/ 00:39:10.552 00:39:10.552 00:39:10.552 Suite: bdevio tests on: Nvme1n1 00:39:10.552 Test: blockdev write read block ...passed 00:39:10.552 Test: blockdev write zeroes read block ...passed 00:39:10.552 Test: blockdev write zeroes read no split ...passed 00:39:10.552 Test: blockdev write zeroes read split ...passed 00:39:10.552 Test: blockdev write zeroes read split partial ...passed 00:39:10.812 Test: blockdev reset ...[2024-11-06 10:31:14.053067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:10.812 [2024-11-06 10:31:14.053134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21624b0 (9): Bad file descriptor 00:39:10.812 [2024-11-06 10:31:14.185964] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:10.812 passed 00:39:10.812 Test: blockdev write read 8 blocks ...passed 00:39:10.812 Test: blockdev write read size > 128k ...passed 00:39:10.812 Test: blockdev write read invalid size ...passed 00:39:10.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:10.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:10.812 Test: blockdev write read max offset ...passed 00:39:11.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:11.072 Test: blockdev writev readv 8 blocks ...passed 00:39:11.072 Test: blockdev writev readv 30 x 1block ...passed 00:39:11.072 Test: blockdev writev readv block ...passed 00:39:11.072 Test: blockdev writev readv size > 128k ...passed 00:39:11.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:11.072 Test: blockdev comparev and writev ...[2024-11-06 10:31:14.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.371754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.371765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.371775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.372337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.372346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.372356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.372361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.372886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.372895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.372905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.372910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.373449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.373457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.373467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:11.072 [2024-11-06 10:31:14.373472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:11.072 passed 00:39:11.072 Test: blockdev nvme passthru rw ...passed 00:39:11.072 Test: blockdev nvme passthru vendor specific ...[2024-11-06 10:31:14.458790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:11.072 [2024-11-06 10:31:14.458801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.459166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:11.072 [2024-11-06 10:31:14.459174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.459519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:11.072 [2024-11-06 10:31:14.459528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:11.072 [2024-11-06 10:31:14.459841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:11.072 [2024-11-06 10:31:14.459849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:11.072 passed 00:39:11.072 Test: blockdev nvme admin passthru ...passed 00:39:11.072 Test: blockdev copy ...passed 00:39:11.072 00:39:11.072 Run Summary: Type Total Ran Passed Failed Inactive 00:39:11.072 suites 1 1 n/a 0 0 00:39:11.072 tests 23 23 23 0 0 00:39:11.072 asserts 152 152 152 0 n/a 00:39:11.072 00:39:11.072 Elapsed time = 1.250 seconds 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:11.333 rmmod nvme_tcp 00:39:11.333 rmmod nvme_fabrics 00:39:11.333 rmmod nvme_keyring 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2282 ']' 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2282 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2282 ']' 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2282 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2282 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2282' 00:39:11.333 killing process with pid 2282 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2282 00:39:11.333 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2282 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.594 10:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.504 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:13.765 00:39:13.765 real 0m12.997s 00:39:13.765 user 0m9.211s 00:39:13.765 sys 
0m7.142s 00:39:13.765 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:13.765 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 ************************************ 00:39:13.765 END TEST nvmf_bdevio 00:39:13.765 ************************************ 00:39:13.765 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:13.765 00:39:13.765 real 5m10.639s 00:39:13.765 user 10m21.680s 00:39:13.765 sys 2m11.619s 00:39:13.765 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:13.765 10:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 ************************************ 00:39:13.765 END TEST nvmf_target_core_interrupt_mode 00:39:13.765 ************************************ 00:39:13.765 10:31:17 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:13.765 10:31:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:13.765 10:31:17 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:13.765 10:31:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 ************************************ 00:39:13.765 START TEST nvmf_interrupt 00:39:13.765 ************************************ 00:39:13.765 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:13.765 * Looking for test storage... 
00:39:13.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:13.765 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:13.765 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:13.765 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.025 --rc genhtml_branch_coverage=1 00:39:14.025 --rc genhtml_function_coverage=1 00:39:14.025 --rc genhtml_legend=1 00:39:14.025 --rc geninfo_all_blocks=1 00:39:14.025 --rc geninfo_unexecuted_blocks=1 00:39:14.025 00:39:14.025 ' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.025 --rc genhtml_branch_coverage=1 00:39:14.025 --rc genhtml_function_coverage=1 00:39:14.025 --rc genhtml_legend=1 00:39:14.025 --rc geninfo_all_blocks=1 00:39:14.025 --rc geninfo_unexecuted_blocks=1 00:39:14.025 00:39:14.025 ' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.025 --rc genhtml_branch_coverage=1 00:39:14.025 --rc genhtml_function_coverage=1 00:39:14.025 --rc genhtml_legend=1 00:39:14.025 --rc geninfo_all_blocks=1 00:39:14.025 --rc geninfo_unexecuted_blocks=1 00:39:14.025 00:39:14.025 ' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.025 --rc genhtml_branch_coverage=1 00:39:14.025 --rc genhtml_function_coverage=1 00:39:14.025 --rc genhtml_legend=1 00:39:14.025 --rc geninfo_all_blocks=1 00:39:14.025 --rc geninfo_unexecuted_blocks=1 00:39:14.025 00:39:14.025 ' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.025 10:31:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:14.026 10:31:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:22.161 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.161 10:31:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.161 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:22.161 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:22.162 Found net devices under 0000:31:00.0: cvl_0_0 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:22.162 Found net devices under 0000:31:00.1: cvl_0_1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:22.162 10:31:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:22.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:22.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:39:22.162 00:39:22.162 --- 10.0.0.2 ping statistics --- 00:39:22.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.162 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:22.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:22.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:39:22.162 00:39:22.162 --- 10.0.0.1 ping statistics --- 00:39:22.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.162 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=7538 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 7538 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 7538 ']' 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:22.162 10:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:22.162 [2024-11-06 10:31:25.492071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:22.162 [2024-11-06 10:31:25.492804] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:22.162 [2024-11-06 10:31:25.492833] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.162 [2024-11-06 10:31:25.565801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:22.162 [2024-11-06 10:31:25.600678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
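The namespace plumbing and ping verification traced above give the target and the initiator separate network stacks on one host: one physical port is moved into a private namespace and addressed as the target, the other stays in the root namespace as the initiator. A condensed sketch of that topology, with if0/if1 as hypothetical stand-ins for cvl_0_0/cvl_0_1 (run as root):

NS=demo_ns_spdk        # hypothetical namespace name
TGT_IF=if0             # target-side port, moved into the namespace
INI_IF=if1             # initiator-side port, stays in the root namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target, as the run above does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1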
00:39:22.162 [2024-11-06 10:31:25.600713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.162 [2024-11-06 10:31:25.600720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.162 [2024-11-06 10:31:25.600727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.162 [2024-11-06 10:31:25.600733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:22.162 [2024-11-06 10:31:25.601897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.162 [2024-11-06 10:31:25.601924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.162 [2024-11-06 10:31:25.656478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:22.162 [2024-11-06 10:31:25.656940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:22.162 [2024-11-06 10:31:25.657298] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:23.103 5000+0 records in 00:39:23.103 5000+0 records out 00:39:23.103 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0182123 s, 562 MB/s 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.103 AIO0 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.103 [2024-11-06 10:31:26.386935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.103 10:31:26 
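Above, the target application is launched inside the namespace with --interrupt-mode, so the reactors sleep on file descriptors instead of busy-polling. A rough way to reproduce that startup and wait for the RPC socket, assuming $SPDK_DIR points at a built SPDK tree:

SPDK_DIR=/path/to/spdk     # assumption: a built SPDK checkout
NS=demo_ns_spdk

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
tgt_pid=$!

# Poll the default RPC socket until the app answers a trivial RPC.
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done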
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.103 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:23.104 [2024-11-06 10:31:26.427246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 7538 0 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 0 idle 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:23.104 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7538 root 20 0 128.2g 43776 32256 S 6.2 0.0 0:00.23 reactor_0' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7538 root 20 0 128.2g 43776 32256 S 6.2 0.0 0:00.23 reactor_0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 
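The RPC sequence traced above builds the whole target configuration: an AIO bdev over a 10 MB scratch file, a TCP transport, a subsystem, a namespace and a listener. The same chain issued with rpc.py directly; the scratch-file path is an assumption for illustration, the flags are the ones shown in the trace:

RPC="$SPDK_DIR/scripts/rpc.py"     # talks to the target's default RPC socket
AIOFILE=/tmp/aiofile               # assumed location of the scratch file

dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000
"$RPC" bdev_aio_create "$AIOFILE" AIO0 2048
"$RPC" nvmf_create_transport -t tcp -o -u 8192 -q 256
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420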
00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 7538 1 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 1 idle 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=7725 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 7538 0 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 7538 0 busy 00:39:23.365 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:23.366 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7538 root 20 0 128.2g 43776 32256 R 25.0 0.0 0:00.27 reactor_0' 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7538 root 20 0 128.2g 43776 32256 R 25.0 0.0 0:00.27 reactor_0 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=25.0 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=25 00:39:23.626 10:31:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:23.626 10:31:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:23.626 10:31:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:24.568 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:24.568 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:24.568 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:24.568 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7538 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.64 reactor_0' 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7538 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.64 reactor_0 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:24.828 10:31:28 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 7538 1 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 7538 1 busy 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:24.828 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:24.829 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:24.829 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:24.829 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:24.829 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7579 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.39 reactor_1' 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7579 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.39 reactor_1 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:25.089 10:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 7725 00:39:35.082 Initializing NVMe Controllers 00:39:35.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:35.082 Controller IO queue size 256, less than required. 00:39:35.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:35.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:35.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:35.082 Initialization complete. Launching workers. 
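The busy/idle decisions above come from sampling per-thread CPU with top and comparing column 9 (%CPU) against a threshold; the perf run whose latency summary follows below is what keeps reactor_0 and reactor_1 busy. A small sketch of that classification, with the pid and threshold as example values:

pid=7538            # example target pid
idx=0               # reactor index to inspect
busy_threshold=30   # percent CPU that counts as "busy"

# One batch sample of all threads in the process; SPDK names reactor threads reactor_N.
line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
[[ -n $line ]] || { echo "reactor_$idx thread not found" >&2; exit 1; }
cpu=$(awk '{print $9}' <<<"$line")   # %CPU column in top's per-thread view
cpu=${cpu%.*}                        # drop the fractional part for integer comparison

if (( cpu > busy_threshold )); then
    echo "reactor_$idx is busy (${cpu}%)"
else
    echo "reactor_$idx is idle (${cpu}%)"
fi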
00:39:35.082 ======================================================== 00:39:35.082 Latency(us) 00:39:35.082 Device Information : IOPS MiB/s Average min max 00:39:35.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19570.60 76.45 13086.06 3079.62 31464.39 00:39:35.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16310.00 63.71 15701.88 7107.13 19097.31 00:39:35.082 ======================================================== 00:39:35.082 Total : 35880.60 140.16 14275.11 3079.62 31464.39 00:39:35.082 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 7538 0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 0 idle 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7538 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.21 reactor_0' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7538 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.21 reactor_0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 7538 1 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 1 idle 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 
00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:35.082 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:39:35.083 10:31:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:39:36.992 10:31:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:39:36.992 10:31:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:39:36.992 10:31:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 
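Above, the kernel initiator attaches to the exported subsystem with nvme-cli and the test then waits for a block device carrying the subsystem's serial number. A hedged sketch of that connect-and-wait step; the run above pins --hostnqn/--hostid to a fixed UUID, here a freshly generated hostnqn stands in:

HOSTNQN=$(nvme gen-hostnqn)        # placeholder; --hostid can be pinned the same way
SERIAL=SPDKISFASTANDAWESOME

nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
    -a 10.0.0.2 -s 4420

# Wait up to ~30 s for a namespace whose serial matches the subsystem's.
for _ in $(seq 1 15); do
    if [[ $(lsblk -l -o NAME,SERIAL | grep -c -w "$SERIAL") -ge 1 ]]; then
        echo "namespace is visible"
        break
    fi
    sleep 2
done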
00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 7538 0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 0 idle 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7538 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.45 reactor_0' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7538 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.45 reactor_0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 7538 1 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 7538 1 idle 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=7538 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- 
# (( j = 10 )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 7538 -w 256 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 7579 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 7579 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:36.992 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:37.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:37.252 rmmod nvme_tcp 00:39:37.252 rmmod nvme_fabrics 00:39:37.252 rmmod nvme_keyring 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 7538 ']' 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # 
killprocess 7538 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 7538 ']' 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 7538 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:39:37.252 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 7538 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 7538' 00:39:37.513 killing process with pid 7538 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 7538 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 7538 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:37.513 10:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.055 10:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:40.055 00:39:40.055 real 0m25.932s 00:39:40.055 user 0m40.554s 00:39:40.055 sys 0m9.855s 00:39:40.055 10:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:40.055 10:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.055 ************************************ 00:39:40.055 END TEST nvmf_interrupt 00:39:40.055 ************************************ 00:39:40.055 00:39:40.055 real 31m4.134s 00:39:40.055 user 61m33.081s 00:39:40.055 sys 10m53.669s 00:39:40.055 10:31:43 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:40.055 10:31:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.055 ************************************ 00:39:40.055 END TEST nvmf_tcp 00:39:40.055 ************************************ 00:39:40.055 10:31:43 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:40.055 10:31:43 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:40.055 10:31:43 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:40.055 10:31:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 
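Teardown above is the mirror image of setup: disconnect the initiator, kill the target, unload the NVMe fabrics modules, strip the tagged iptables rule and remove the namespace. A compressed sketch under the same assumed names from the earlier snippets:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1

kill "$tgt_pid" 2>/dev/null && wait "$tgt_pid" 2>/dev/null

modprobe -r nvme-tcp nvme-fabrics || true

# Drop only the rules this test tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete "$NS" 2>/dev/null
ip -4 addr flush dev "$INI_IF" 2>/dev/null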
00:39:40.055 10:31:43 -- common/autotest_common.sh@10 -- # set +x 00:39:40.055 ************************************ 00:39:40.055 START TEST spdkcli_nvmf_tcp 00:39:40.055 ************************************ 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:40.055 * Looking for test storage... 00:39:40.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:40.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.055 --rc genhtml_branch_coverage=1 00:39:40.055 --rc genhtml_function_coverage=1 00:39:40.055 --rc genhtml_legend=1 00:39:40.055 --rc geninfo_all_blocks=1 00:39:40.055 --rc geninfo_unexecuted_blocks=1 00:39:40.055 00:39:40.055 ' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:40.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.055 --rc genhtml_branch_coverage=1 00:39:40.055 --rc genhtml_function_coverage=1 00:39:40.055 --rc genhtml_legend=1 00:39:40.055 --rc geninfo_all_blocks=1 00:39:40.055 --rc geninfo_unexecuted_blocks=1 00:39:40.055 00:39:40.055 ' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:40.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.055 --rc genhtml_branch_coverage=1 00:39:40.055 --rc genhtml_function_coverage=1 00:39:40.055 --rc genhtml_legend=1 00:39:40.055 --rc geninfo_all_blocks=1 00:39:40.055 --rc geninfo_unexecuted_blocks=1 00:39:40.055 00:39:40.055 ' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:40.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.055 --rc genhtml_branch_coverage=1 00:39:40.055 --rc genhtml_function_coverage=1 00:39:40.055 --rc genhtml_legend=1 00:39:40.055 --rc geninfo_all_blocks=1 00:39:40.055 --rc geninfo_unexecuted_blocks=1 00:39:40.055 00:39:40.055 ' 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:40.055 
10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:40.055 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:40.056 10:31:43 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:40.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=11153 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 11153 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 11153 ']' 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:40.056 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.056 [2024-11-06 10:31:43.480216] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:39:40.056 [2024-11-06 10:31:43.480281] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11153 ] 00:39:40.316 [2024-11-06 10:31:43.563878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:40.316 [2024-11-06 10:31:43.602876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.316 [2024-11-06 10:31:43.602891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.316 10:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:40.316 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:40.316 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:40.316 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:40.316 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:40.316 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:40.316 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:40.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:40.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:40.316 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:40.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:40.316 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:40.316 ' 00:39:42.858 [2024-11-06 10:31:46.146081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:44.239 [2024-11-06 10:31:47.354107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:46.150 [2024-11-06 10:31:49.572571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:48.059 [2024-11-06 10:31:51.478753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:49.969 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:49.969 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:49.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:49.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:49.969 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:49.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:49.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:49.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:49.970 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:49.970 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.230 
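The configuration pass above is driven through spdkcli_job.py, but the same target layout can be reproduced with plain scripts/rpc.py calls; the RPC method names and flags used below also appear verbatim in the identify_passthru trace later in this log. A condensed, illustrative sketch covering a representative subset of the commands executed above (serial number and port copied from the job):

    RPC=./scripts/rpc.py

    # backing bdevs and the TCP transport (cf. '/bdevs/malloc create 32 512 Malloc1'
    # and 'nvmf/transport create tcp ...' above)
    $RPC bdev_malloc_create 32 512 -b Malloc1
    $RPC bdev_malloc_create 32 512 -b Malloc3
    $RPC nvmf_create_transport -t tcp -u 8192

    # one subsystem with a namespace and a TCP listener, as in the spdkcli job
    $RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW
    $RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260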
10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:50.230 10:31:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:50.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:50.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:50.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:50.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:50.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:50.230 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:50.230 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:50.230 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:50.230 ' 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:56.810 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:56.810 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:56.810 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:56.810 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.810 
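The clear-config pass above undoes the setup in reverse order, again through spdkcli_job.py. For reference, a minimal rpc.py-based teardown sketch (illustrative, not the path the job actually takes; deleting a subsystem removes its namespaces, listeners and host entries along with it):

    RPC=./scripts/rpc.py

    # subsystems first, then the backing bdevs
    $RPC nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
    $RPC bdev_malloc_delete Malloc3
    $RPC bdev_malloc_delete Malloc1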
10:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 11153 ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 11153' 00:39:56.810 killing process with pid 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 11153 ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 11153 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 11153 ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 11153 00:39:56.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (11153) - No such process 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 11153 is not found' 00:39:56.810 Process with pid 11153 is not found 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:56.810 00:39:56.810 real 0m16.271s 00:39:56.810 user 0m34.548s 00:39:56.810 sys 0m0.721s 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:56.810 10:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.810 ************************************ 00:39:56.810 END TEST spdkcli_nvmf_tcp 00:39:56.810 ************************************ 00:39:56.810 10:31:59 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:56.810 10:31:59 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:56.810 10:31:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:56.810 10:31:59 -- common/autotest_common.sh@10 -- # set +x 00:39:56.810 ************************************ 00:39:56.810 START TEST nvmf_identify_passthru 00:39:56.810 ************************************ 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:56.810 * Looking for test storage... 
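The nvmf_identify_passthru test starting here checks that identify data from a local PCIe controller is passed through the NVMe-oF target unchanged. Condensed from the trace that follows (device 0000:65:00.0, listener 10.0.0.2:4420 and all RPC names are taken from this log; treat the script as an illustrative sketch of the flow, not the test itself):

    # the target was started with --wait-for-rpc, so config can be set before framework init
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # expose the local PCIe controller through a TCP subsystem
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # identify the controller over PCIe and over the fabric, then compare serial numbers
    pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
    tcp_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
    [ "$pcie_sn" = "$tcp_sn" ] || echo "passthru identify mismatch: $pcie_sn vs $tcp_sn"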
00:39:56.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:56.810 10:31:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:56.810 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.810 --rc genhtml_branch_coverage=1 00:39:56.810 --rc genhtml_function_coverage=1 00:39:56.810 --rc genhtml_legend=1 00:39:56.810 --rc geninfo_all_blocks=1 00:39:56.811 --rc geninfo_unexecuted_blocks=1 00:39:56.811 00:39:56.811 ' 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:56.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.811 --rc genhtml_branch_coverage=1 00:39:56.811 --rc genhtml_function_coverage=1 00:39:56.811 --rc genhtml_legend=1 00:39:56.811 --rc geninfo_all_blocks=1 00:39:56.811 --rc geninfo_unexecuted_blocks=1 00:39:56.811 00:39:56.811 ' 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:56.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.811 --rc genhtml_branch_coverage=1 00:39:56.811 --rc genhtml_function_coverage=1 00:39:56.811 --rc genhtml_legend=1 00:39:56.811 --rc geninfo_all_blocks=1 00:39:56.811 --rc geninfo_unexecuted_blocks=1 00:39:56.811 00:39:56.811 ' 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:56.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.811 --rc genhtml_branch_coverage=1 00:39:56.811 --rc genhtml_function_coverage=1 00:39:56.811 --rc genhtml_legend=1 00:39:56.811 --rc geninfo_all_blocks=1 00:39:56.811 --rc geninfo_unexecuted_blocks=1 00:39:56.811 00:39:56.811 ' 00:39:56.811 10:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:56.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:56.811 10:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:56.811 10:31:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.811 10:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:56.811 10:31:59 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:56.811 10:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:04.953 10:32:07 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:04.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:04.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:04.953 Found net devices under 0000:31:00.0: cvl_0_0 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:04.953 Found net devices under 0000:31:00.1: cvl_0_1 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:04.953 10:32:07 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:04.953 10:32:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:04.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:40:04.953 00:40:04.953 --- 10.0.0.2 ping statistics --- 00:40:04.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.953 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:04.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:40:04.953 00:40:04.953 --- 10.0.0.1 ping statistics --- 00:40:04.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.953 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:04.953 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:04.954 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.954 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:04.954 10:32:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:40:04.954 10:32:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:04.954 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:05.526 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:40:05.526 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:05.526 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:05.527 10:32:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=18596 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:06.098 10:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 18596 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 18596 ']' 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:06.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:06.098 10:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:06.098 [2024-11-06 10:32:09.401169] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:40:06.098 [2024-11-06 10:32:09.401224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:06.098 [2024-11-06 10:32:09.485681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:06.098 [2024-11-06 10:32:09.522878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:06.098 [2024-11-06 10:32:09.522912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:06.098 [2024-11-06 10:32:09.522920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:06.098 [2024-11-06 10:32:09.522927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:06.098 [2024-11-06 10:32:09.522932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:06.098 [2024-11-06 10:32:09.524618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:06.098 [2024-11-06 10:32:09.524743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:06.098 [2024-11-06 10:32:09.524918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.098 [2024-11-06 10:32:09.524918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:40:07.038 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.038 INFO: Log level set to 20 00:40:07.038 INFO: Requests: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "method": "nvmf_set_config", 00:40:07.038 "id": 1, 00:40:07.038 "params": { 00:40:07.038 "admin_cmd_passthru": { 00:40:07.038 "identify_ctrlr": true 00:40:07.038 } 00:40:07.038 } 00:40:07.038 } 00:40:07.038 00:40:07.038 INFO: response: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "id": 1, 00:40:07.038 "result": true 00:40:07.038 } 00:40:07.038 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.038 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.038 INFO: Setting log level to 20 00:40:07.038 INFO: Setting log level to 20 00:40:07.038 INFO: Log level set to 20 00:40:07.038 INFO: Log level set to 20 00:40:07.038 INFO: Requests: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "method": "framework_start_init", 00:40:07.038 "id": 1 00:40:07.038 } 00:40:07.038 00:40:07.038 INFO: Requests: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "method": "framework_start_init", 00:40:07.038 "id": 1 00:40:07.038 } 00:40:07.038 00:40:07.038 [2024-11-06 10:32:10.268912] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:07.038 INFO: response: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "id": 1, 00:40:07.038 "result": true 00:40:07.038 } 00:40:07.038 00:40:07.038 INFO: response: 00:40:07.038 { 00:40:07.038 "jsonrpc": "2.0", 00:40:07.038 "id": 1, 00:40:07.038 "result": true 00:40:07.038 } 00:40:07.038 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.038 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.038 10:32:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:07.038 INFO: Setting log level to 40 00:40:07.038 INFO: Setting log level to 40 00:40:07.038 INFO: Setting log level to 40 00:40:07.038 [2024-11-06 10:32:10.282232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.038 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.038 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.038 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.298 Nvme0n1 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.298 [2024-11-06 10:32:10.680074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.298 [ 00:40:07.298 { 00:40:07.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:07.298 "subtype": "Discovery", 00:40:07.298 "listen_addresses": [], 00:40:07.298 "allow_any_host": true, 00:40:07.298 "hosts": [] 00:40:07.298 }, 00:40:07.298 { 00:40:07.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.298 "subtype": "NVMe", 00:40:07.298 "listen_addresses": [ 00:40:07.298 { 00:40:07.298 "trtype": "TCP", 00:40:07.298 "adrfam": "IPv4", 00:40:07.298 "traddr": "10.0.0.2", 00:40:07.298 "trsvcid": "4420" 00:40:07.298 } 00:40:07.298 ], 00:40:07.298 "allow_any_host": true, 00:40:07.298 "hosts": [], 00:40:07.298 "serial_number": 
"SPDK00000000000001", 00:40:07.298 "model_number": "SPDK bdev Controller", 00:40:07.298 "max_namespaces": 1, 00:40:07.298 "min_cntlid": 1, 00:40:07.298 "max_cntlid": 65519, 00:40:07.298 "namespaces": [ 00:40:07.298 { 00:40:07.298 "nsid": 1, 00:40:07.298 "bdev_name": "Nvme0n1", 00:40:07.298 "name": "Nvme0n1", 00:40:07.298 "nguid": "3634473052605494002538450000002D", 00:40:07.298 "uuid": "36344730-5260-5494-0025-38450000002d" 00:40:07.298 } 00:40:07.298 ] 00:40:07.298 } 00:40:07.298 ] 00:40:07.298 10:32:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:07.298 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:07.558 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:40:07.558 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:07.558 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:07.558 10:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:07.817 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:07.817 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.817 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:07.817 10:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:07.817 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:07.817 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:07.818 rmmod nvme_tcp 00:40:07.818 rmmod nvme_fabrics 00:40:07.818 rmmod nvme_keyring 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
18596 ']' 00:40:07.818 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 18596 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 18596 ']' 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 18596 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 18596 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 18596' 00:40:07.818 killing process with pid 18596 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 18596 00:40:07.818 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 18596 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:08.077 10:32:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.077 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:08.077 10:32:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.671 10:32:13 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.671 00:40:10.671 real 0m14.078s 00:40:10.671 user 0m10.502s 00:40:10.671 sys 0m7.354s 00:40:10.671 10:32:13 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:10.671 10:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:10.671 ************************************ 00:40:10.671 END TEST nvmf_identify_passthru 00:40:10.671 ************************************ 00:40:10.671 10:32:13 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:10.671 10:32:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:10.671 10:32:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:10.671 10:32:13 -- common/autotest_common.sh@10 -- # set +x 00:40:10.671 ************************************ 00:40:10.671 START TEST nvmf_dif 00:40:10.671 ************************************ 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:10.671 * Looking for test storage... 
00:40:10.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.671 10:32:13 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.671 10:32:13 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:10.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.672 --rc genhtml_branch_coverage=1 00:40:10.672 --rc genhtml_function_coverage=1 00:40:10.672 --rc genhtml_legend=1 00:40:10.672 --rc geninfo_all_blocks=1 00:40:10.672 --rc geninfo_unexecuted_blocks=1 00:40:10.672 00:40:10.672 ' 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.672 --rc genhtml_branch_coverage=1 00:40:10.672 --rc genhtml_function_coverage=1 00:40:10.672 --rc genhtml_legend=1 00:40:10.672 --rc geninfo_all_blocks=1 00:40:10.672 --rc geninfo_unexecuted_blocks=1 00:40:10.672 00:40:10.672 ' 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:40:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.672 --rc genhtml_branch_coverage=1 00:40:10.672 --rc genhtml_function_coverage=1 00:40:10.672 --rc genhtml_legend=1 00:40:10.672 --rc geninfo_all_blocks=1 00:40:10.672 --rc geninfo_unexecuted_blocks=1 00:40:10.672 00:40:10.672 ' 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.672 --rc genhtml_branch_coverage=1 00:40:10.672 --rc genhtml_function_coverage=1 00:40:10.672 --rc genhtml_legend=1 00:40:10.672 --rc geninfo_all_blocks=1 00:40:10.672 --rc geninfo_unexecuted_blocks=1 00:40:10.672 00:40:10.672 ' 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:10.672 10:32:13 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.672 10:32:13 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.672 10:32:13 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.672 10:32:13 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.672 10:32:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.672 10:32:13 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.672 10:32:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.672 10:32:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:10.672 10:32:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:10.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:10.672 10:32:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:10.672 10:32:13 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:10.672 10:32:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:18.890 10:32:21 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:18.890 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:18.890 
10:32:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:18.890 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:18.890 Found net devices under 0000:31:00.0: cvl_0_0 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:18.890 Found net devices under 0000:31:00.1: cvl_0_1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:18.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:18.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:40:18.890 00:40:18.890 --- 10.0.0.2 ping statistics --- 00:40:18.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.890 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:18.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:18.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:40:18.890 00:40:18.890 --- 10.0.0.1 ping statistics --- 00:40:18.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:18.890 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:18.890 10:32:22 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:23.096 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:23.096 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:23.096 10:32:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:23.096 10:32:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:23.097 10:32:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:23.097 10:32:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:23.097 10:32:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=25574 00:40:23.097 10:32:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 25574 00:40:23.097 10:32:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 25574 ']' 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:23.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:23.097 10:32:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:23.097 [2024-11-06 10:32:26.593996] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:40:23.097 [2024-11-06 10:32:26.594057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:23.357 [2024-11-06 10:32:26.676676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.357 [2024-11-06 10:32:26.711287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:23.357 [2024-11-06 10:32:26.711323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:23.357 [2024-11-06 10:32:26.711332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:23.357 [2024-11-06 10:32:26.711338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:23.357 [2024-11-06 10:32:26.711344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:23.357 [2024-11-06 10:32:26.711916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:40:23.927 10:32:27 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:23.927 10:32:27 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.927 10:32:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:23.927 10:32:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:23.927 10:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:23.927 [2024-11-06 10:32:27.423970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.188 10:32:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.188 10:32:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:24.188 10:32:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:24.188 10:32:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:24.188 10:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:24.188 ************************************ 00:40:24.188 START TEST fio_dif_1_default 00:40:24.188 ************************************ 00:40:24.188 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:40:24.188 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:24.188 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:24.188 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.189 bdev_null0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:24.189 [2024-11-06 10:32:27.508329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:24.189 { 00:40:24.189 "params": { 00:40:24.189 "name": "Nvme$subsystem", 00:40:24.189 "trtype": "$TEST_TRANSPORT", 00:40:24.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.189 "adrfam": "ipv4", 00:40:24.189 "trsvcid": "$NVMF_PORT", 00:40:24.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:40:24.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.189 "hdgst": ${hdgst:-false}, 00:40:24.189 "ddgst": ${ddgst:-false} 00:40:24.189 }, 00:40:24.189 "method": "bdev_nvme_attach_controller" 00:40:24.189 } 00:40:24.189 EOF 00:40:24.189 )") 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
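The target-side plumbing for this fio run was created by the rpc_cmd calls traced above; rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same setup can be reproduced outside the harness by restating those calls with rpc.py directly, using the addresses and NQN from this run:

# Sketch of the fio_dif_1_default target setup (values copied from the trace above).
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420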
00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:24.189 "params": { 00:40:24.189 "name": "Nvme0", 00:40:24.189 "trtype": "tcp", 00:40:24.189 "traddr": "10.0.0.2", 00:40:24.189 "adrfam": "ipv4", 00:40:24.189 "trsvcid": "4420", 00:40:24.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:24.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:24.189 "hdgst": false, 00:40:24.189 "ddgst": false 00:40:24.189 }, 00:40:24.189 "method": "bdev_nvme_attach_controller" 00:40:24.189 }' 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:24.189 10:32:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:24.450 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:24.450 fio-3.35 00:40:24.450 Starting 1 thread 00:40:36.675 00:40:36.675 filename0: (groupid=0, jobs=1): err= 0: pid=26107: Wed Nov 6 10:32:38 2024 00:40:36.675 read: IOPS=189, BW=759KiB/s (778kB/s)(7600KiB/10009msec) 00:40:36.675 slat (nsec): min=5391, max=62784, avg=6251.98, stdev=1887.90 00:40:36.675 clat (usec): min=468, max=45064, avg=21052.68, stdev=20174.21 00:40:36.675 lat (usec): min=474, max=45100, avg=21058.93, stdev=20174.19 00:40:36.675 clat percentiles (usec): 00:40:36.675 | 1.00th=[ 644], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 865], 00:40:36.675 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[41157], 60.00th=[41157], 00:40:36.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:36.675 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:40:36.675 | 99.99th=[44827] 00:40:36.675 bw ( KiB/s): min= 704, max= 768, per=99.83%, avg=758.40, stdev=21.02, samples=20 00:40:36.675 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:40:36.675 lat (usec) : 500=0.21%, 750=14.63%, 1000=34.84% 00:40:36.675 lat (msec) : 2=0.21%, 50=50.11% 00:40:36.675 cpu : usr=93.36%, sys=6.40%, ctx=21, majf=0, minf=256 00:40:36.675 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:36.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:36.675 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:36.675 latency : target=0, window=0, percentile=100.00%, depth=4 
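The job above drives the null bdev through fio's external SPDK bdev engine: fio is launched with the spdk_bdev plugin preloaded, the resolved JSON printed just before it (attaching bdev Nvme0n1 over TCP) is supplied as the spdk_json_conf, and the generated job file arrives on the second descriptor. A rough stand-alone equivalent using ordinary files is sketched below; the job parameters are reconstructed from the fio banner (rw=randread, bs=4096, iodepth=4), and the filename Nvme0n1 is an assumption based on the attached controller name "Nvme0", so the file gen_fio_conf actually emits may differ:

# Sketch: reproduce the fio_dif_1_default run outside the harness.
# bdev.json is assumed to hold the bdev_nvme_attach_controller config printed above.
cat > dif.job <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF
# Launch fio with the SPDK bdev plugin preloaded, as the harness does.
LD_PRELOAD=<spdk>/build/fio/spdk_bdev /usr/src/fio/fio dif.job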
00:40:36.675 00:40:36.675 Run status group 0 (all jobs): 00:40:36.675 READ: bw=759KiB/s (778kB/s), 759KiB/s-759KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10009-10009msec 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 00:40:36.675 real 0m11.173s 00:40:36.675 user 0m23.171s 00:40:36.675 sys 0m0.967s 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 ************************************ 00:40:36.675 END TEST fio_dif_1_default 00:40:36.675 ************************************ 00:40:36.675 10:32:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:36.675 10:32:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:36.675 10:32:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 ************************************ 00:40:36.675 START TEST fio_dif_1_multi_subsystems 00:40:36.675 ************************************ 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 bdev_null0 00:40:36.675 10:32:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 [2024-11-06 10:32:38.766010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 bdev_null1 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.675 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:36.676 { 00:40:36.676 "params": { 00:40:36.676 "name": "Nvme$subsystem", 00:40:36.676 "trtype": "$TEST_TRANSPORT", 00:40:36.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:36.676 "adrfam": "ipv4", 00:40:36.676 "trsvcid": "$NVMF_PORT", 00:40:36.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:36.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:36.676 "hdgst": ${hdgst:-false}, 00:40:36.676 "ddgst": ${ddgst:-false} 00:40:36.676 }, 00:40:36.676 "method": "bdev_nvme_attach_controller" 00:40:36.676 } 00:40:36.676 EOF 00:40:36.676 )") 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:36.676 
10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:36.676 { 00:40:36.676 "params": { 00:40:36.676 "name": "Nvme$subsystem", 00:40:36.676 "trtype": "$TEST_TRANSPORT", 00:40:36.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:36.676 "adrfam": "ipv4", 00:40:36.676 "trsvcid": "$NVMF_PORT", 00:40:36.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:36.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:36.676 "hdgst": ${hdgst:-false}, 00:40:36.676 "ddgst": ${ddgst:-false} 00:40:36.676 }, 00:40:36.676 "method": "bdev_nvme_attach_controller" 00:40:36.676 } 00:40:36.676 EOF 00:40:36.676 )") 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:36.676 "params": { 00:40:36.676 "name": "Nvme0", 00:40:36.676 "trtype": "tcp", 00:40:36.676 "traddr": "10.0.0.2", 00:40:36.676 "adrfam": "ipv4", 00:40:36.676 "trsvcid": "4420", 00:40:36.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:36.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:36.676 "hdgst": false, 00:40:36.676 "ddgst": false 00:40:36.676 }, 00:40:36.676 "method": "bdev_nvme_attach_controller" 00:40:36.676 },{ 00:40:36.676 "params": { 00:40:36.676 "name": "Nvme1", 00:40:36.676 "trtype": "tcp", 00:40:36.676 "traddr": "10.0.0.2", 00:40:36.676 "adrfam": "ipv4", 00:40:36.676 "trsvcid": "4420", 00:40:36.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:36.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:36.676 "hdgst": false, 00:40:36.676 "ddgst": false 00:40:36.676 }, 00:40:36.676 "method": "bdev_nvme_attach_controller" 00:40:36.676 }' 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:36.676 10:32:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:36.676 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:36.676 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:36.676 fio-3.35 00:40:36.676 Starting 2 threads 00:40:46.667 00:40:46.667 filename0: (groupid=0, jobs=1): err= 0: pid=28311: Wed Nov 6 10:32:50 2024 00:40:46.667 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10036msec) 00:40:46.667 slat (nsec): min=5394, max=45534, avg=6427.48, stdev=1930.98 00:40:46.667 clat (usec): min=40957, max=42464, avg=41971.82, stdev=116.61 00:40:46.667 lat (usec): min=40966, max=42493, avg=41978.25, stdev=116.51 00:40:46.667 clat percentiles (usec): 00:40:46.667 | 1.00th=[41157], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:40:46.667 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:46.667 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:46.667 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:46.667 | 99.99th=[42206] 00:40:46.667 bw ( KiB/s): min= 352, max= 384, per=33.48%, avg=380.80, stdev= 9.85, samples=20 00:40:46.667 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:40:46.667 lat (msec) : 50=100.00% 00:40:46.667 cpu : usr=95.69%, sys=4.11%, ctx=14, majf=0, minf=127 00:40:46.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:46.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.667 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:46.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:46.667 filename1: (groupid=0, jobs=1): err= 0: pid=28312: Wed Nov 6 10:32:50 2024 00:40:46.667 read: IOPS=188, BW=755KiB/s (773kB/s)(7568KiB/10028msec) 00:40:46.667 slat (nsec): min=5399, max=28261, avg=6418.46, stdev=1324.65 00:40:46.667 clat (usec): min=674, max=42642, avg=21182.74, stdev=20222.19 00:40:46.667 lat (usec): min=683, max=42670, avg=21189.16, stdev=20222.18 00:40:46.667 clat percentiles (usec): 00:40:46.667 | 1.00th=[ 832], 5.00th=[ 881], 10.00th=[ 898], 20.00th=[ 914], 00:40:46.667 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:40:46.667 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:40:46.667 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:46.667 | 99.99th=[42730] 00:40:46.667 bw ( KiB/s): min= 704, max= 768, per=66.51%, avg=755.20, stdev=26.27, samples=20 00:40:46.667 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:40:46.667 lat (usec) : 750=0.37%, 1000=48.20% 00:40:46.667 lat (msec) : 2=1.32%, 50=50.11% 00:40:46.667 cpu : usr=95.22%, sys=4.58%, ctx=14, majf=0, minf=142 00:40:46.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:46.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.667 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:46.667 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:46.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:46.667 00:40:46.667 Run status group 0 (all jobs): 00:40:46.667 READ: bw=1135KiB/s (1162kB/s), 381KiB/s-755KiB/s (390kB/s-773kB/s), io=11.1MiB (11.7MB), run=10028-10036msec 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 00:40:46.927 real 0m11.558s 00:40:46.927 user 0m35.527s 00:40:46.927 sys 0m1.240s 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 ************************************ 00:40:46.927 END TEST fio_dif_1_multi_subsystems 00:40:46.927 ************************************ 00:40:46.927 10:32:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:40:46.927 10:32:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:46.927 10:32:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 ************************************ 00:40:46.927 START TEST fio_dif_rand_params 00:40:46.927 ************************************ 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 bdev_null0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:46.927 [2024-11-06 10:32:50.401595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.927 10:32:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:46.927 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:46.927 { 00:40:46.927 "params": { 00:40:46.927 "name": "Nvme$subsystem", 00:40:46.927 "trtype": "$TEST_TRANSPORT", 00:40:46.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:46.928 "adrfam": "ipv4", 00:40:46.928 "trsvcid": "$NVMF_PORT", 00:40:46.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:46.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:46.928 "hdgst": ${hdgst:-false}, 00:40:46.928 "ddgst": ${ddgst:-false} 00:40:46.928 }, 00:40:46.928 "method": "bdev_nvme_attach_controller" 00:40:46.928 } 00:40:46.928 EOF 00:40:46.928 )") 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
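The jq step above closes the config-assembly phase traced in this test: each controller gets a JSON fragment rendered from the heredoc template, the fragments are comma-joined via IFS=,, and the validated document that appears in the next entries is what fio reads over /dev/fd/62. A minimal bash sketch of that pattern, assuming an outer JSON array wrapper and an illustrative function name (this is not the verbatim gen_nvmf_target_json from nvmf/common.sh):

# Sketch only: mirrors the fragment-per-subsystem build, the IFS=, join and
# the jq validation traced above; the enclosing array and function name are
# assumptions for illustration.
gen_bdev_nvme_config() {
    local config=() sub
    for sub in "$@"; do
        config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp",
 "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
 "subnqn": "nqn.2016-06.io.spdk:cnode%s",
 "hostnqn": "nqn.2016-06.io.spdk:host%s",
 "hdgst": false, "ddgst": false },
 "method": "bdev_nvme_attach_controller" }' "$sub" "$sub" "$sub")")
    done
    local IFS=,
    # Comma-join the fragments and let jq validate/pretty-print the result.
    printf '[ %s ]\n' "${config[*]}" | jq .
}

gen_bdev_nvme_config 0    # single controller, matching this single-subsystem run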
00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:46.928 10:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:46.928 "params": { 00:40:46.928 "name": "Nvme0", 00:40:46.928 "trtype": "tcp", 00:40:46.928 "traddr": "10.0.0.2", 00:40:46.928 "adrfam": "ipv4", 00:40:46.928 "trsvcid": "4420", 00:40:46.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.928 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:46.928 "hdgst": false, 00:40:46.928 "ddgst": false 00:40:46.928 }, 00:40:46.928 "method": "bdev_nvme_attach_controller" 00:40:46.928 }' 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:47.188 10:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.463 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:47.463 ... 
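The banner above (filename0 plus the "..." marker for its sibling jobs) is fio echoing back the job description handed to it on /dev/fd/61: one filename0 section, randread, 128 KiB blocks, iodepth 3, and, per the NULL_DIF=3 settings earlier in the trace, numjobs=3 with a 5 second runtime. The generated job file itself is never printed in the log, so the following standalone equivalent is a reconstruction under those assumptions; the bdev name Nvme0n1 and the on-disk JSON config path are likewise assumed rather than taken from the trace:

# Approximate standalone equivalent of the traced invocation; not the
# verbatim output of gen_fio_conf.
cat > fio_dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf=./bdev.json fio_dif_rand_params.fio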
00:40:47.463 fio-3.35 00:40:47.463 Starting 3 threads 00:40:54.051 00:40:54.051 filename0: (groupid=0, jobs=1): err= 0: pid=30762: Wed Nov 6 10:32:56 2024 00:40:54.051 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5006msec) 00:40:54.051 slat (nsec): min=5667, max=30997, avg=8027.94, stdev=2438.57 00:40:54.051 clat (usec): min=6923, max=55747, avg=12890.42, stdev=4971.33 00:40:54.051 lat (usec): min=6929, max=55755, avg=12898.45, stdev=4971.58 00:40:54.051 clat percentiles (usec): 00:40:54.051 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10945], 00:40:54.051 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:40:54.051 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14877], 00:40:54.051 | 99.00th=[48497], 99.50th=[51119], 99.90th=[54789], 99.95th=[55837], 00:40:54.051 | 99.99th=[55837] 00:40:54.051 bw ( KiB/s): min=23296, max=32000, per=32.26%, avg=29721.60, stdev=2639.68, samples=10 00:40:54.051 iops : min= 182, max= 250, avg=232.20, stdev=20.62, samples=10 00:40:54.052 lat (msec) : 10=13.23%, 20=85.22%, 50=0.77%, 100=0.77% 00:40:54.052 cpu : usr=94.75%, sys=5.00%, ctx=9, majf=0, minf=109 00:40:54.052 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.052 filename0: (groupid=0, jobs=1): err= 0: pid=30763: Wed Nov 6 10:32:56 2024 00:40:54.052 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(144MiB/5044msec) 00:40:54.052 slat (nsec): min=5414, max=33530, avg=8589.20, stdev=2306.40 00:40:54.052 clat (usec): min=7247, max=53434, avg=13097.62, stdev=4810.48 00:40:54.052 lat (usec): min=7263, max=53442, avg=13106.21, stdev=4810.54 00:40:54.052 clat percentiles (usec): 00:40:54.052 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11076], 00:40:54.052 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:40:54.052 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:40:54.052 | 99.00th=[47973], 99.50th=[50594], 99.90th=[51643], 99.95th=[53216], 00:40:54.052 | 99.99th=[53216] 00:40:54.052 bw ( KiB/s): min=24576, max=31488, per=31.93%, avg=29414.40, stdev=1962.48, samples=10 00:40:54.052 iops : min= 192, max= 246, avg=229.80, stdev=15.33, samples=10 00:40:54.052 lat (msec) : 10=11.12%, 20=87.40%, 50=0.87%, 100=0.61% 00:40:54.052 cpu : usr=95.20%, sys=4.54%, ctx=8, majf=0, minf=94 00:40:54.052 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.052 filename0: (groupid=0, jobs=1): err= 0: pid=30764: Wed Nov 6 10:32:56 2024 00:40:54.052 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(164MiB/5013msec) 00:40:54.052 slat (nsec): min=5432, max=31624, avg=8446.19, stdev=2283.14 00:40:54.052 clat (usec): min=6092, max=52669, avg=11425.07, stdev=6554.23 00:40:54.052 lat (usec): min=6101, max=52679, avg=11433.52, stdev=6554.18 00:40:54.052 clat percentiles (usec): 00:40:54.052 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9372], 
00:40:54.052 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:40:54.052 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12125], 95.00th=[12911], 00:40:54.052 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51643], 99.95th=[52691], 00:40:54.052 | 99.99th=[52691] 00:40:54.052 bw ( KiB/s): min=27904, max=39424, per=36.46%, avg=33587.20, stdev=3120.91, samples=10 00:40:54.052 iops : min= 218, max= 308, avg=262.40, stdev=24.38, samples=10 00:40:54.052 lat (msec) : 10=37.11%, 20=60.15%, 50=1.52%, 100=1.22% 00:40:54.052 cpu : usr=94.21%, sys=5.53%, ctx=6, majf=0, minf=76 00:40:54.052 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.052 issued rwts: total=1315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.052 00:40:54.052 Run status group 0 (all jobs): 00:40:54.052 READ: bw=90.0MiB/s (94.3MB/s), 28.5MiB/s-32.8MiB/s (29.9MB/s-34.4MB/s), io=454MiB (476MB), run=5006-5044msec 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 bdev_null0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 [2024-11-06 10:32:56.683554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 bdev_null1 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 bdev_null2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.052 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.053 10:32:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:54.053 { 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme$subsystem", 00:40:54.053 "trtype": "$TEST_TRANSPORT", 00:40:54.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "$NVMF_PORT", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.053 "hdgst": ${hdgst:-false}, 00:40:54.053 "ddgst": ${ddgst:-false} 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 } 00:40:54.053 EOF 00:40:54.053 )") 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:54.053 { 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme$subsystem", 00:40:54.053 "trtype": "$TEST_TRANSPORT", 00:40:54.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "$NVMF_PORT", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.053 "hdgst": ${hdgst:-false}, 00:40:54.053 "ddgst": ${ddgst:-false} 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 } 00:40:54.053 EOF 00:40:54.053 )") 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.053 10:32:56 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:54.053 { 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme$subsystem", 00:40:54.053 "trtype": "$TEST_TRANSPORT", 00:40:54.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "$NVMF_PORT", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.053 "hdgst": ${hdgst:-false}, 00:40:54.053 "ddgst": ${ddgst:-false} 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 } 00:40:54.053 EOF 00:40:54.053 )") 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme0", 00:40:54.053 "trtype": "tcp", 00:40:54.053 "traddr": "10.0.0.2", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "4420", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:54.053 "hdgst": false, 00:40:54.053 "ddgst": false 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 },{ 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme1", 00:40:54.053 "trtype": "tcp", 00:40:54.053 "traddr": "10.0.0.2", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "4420", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:54.053 "hdgst": false, 00:40:54.053 "ddgst": false 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 },{ 00:40:54.053 "params": { 00:40:54.053 "name": "Nvme2", 00:40:54.053 "trtype": "tcp", 00:40:54.053 "traddr": "10.0.0.2", 00:40:54.053 "adrfam": "ipv4", 00:40:54.053 "trsvcid": "4420", 00:40:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:54.053 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:54.053 "hdgst": false, 00:40:54.053 "ddgst": false 00:40:54.053 }, 00:40:54.053 "method": "bdev_nvme_attach_controller" 00:40:54.053 }' 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:54.053 
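The two empty asan_lib assignments above are the sanitizer probe coming back negative: the helper runs ldd against the spdk_bdev engine, looks first for libasan and then for libclang_rt.asan, and would preload whichever resolved path it finds ahead of the engine itself. Neither is linked in this build, so the LD_PRELOAD set in the next entries carries only the plugin. A hedged sketch of that detection logic, mirroring the traced steps rather than copying autotest_common.sh:

# Sketch of the preload detection traced above; the plugin path is the one
# used in this job, the loop structure is illustrative.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Third ldd column is the resolved library path, as in the awk step above.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done

# Preload the sanitizer runtime (if any) before the fio engine, then run the
# same invocation as the trace.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61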
10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:54.053 10:32:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.053 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:54.053 ... 00:40:54.053 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:54.053 ... 00:40:54.053 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:54.053 ... 00:40:54.053 fio-3.35 00:40:54.053 Starting 24 threads 00:41:06.284 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32046: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10003msec) 00:41:06.284 slat (nsec): min=5589, max=68629, avg=11469.36, stdev=8813.83 00:41:06.284 clat (usec): min=1340, max=47571, avg=31322.49, stdev=6478.82 00:41:06.284 lat (usec): min=1350, max=47592, avg=31333.96, stdev=6478.06 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[ 1713], 5.00th=[20579], 10.00th=[31589], 20.00th=[32375], 00:41:06.284 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.284 | 99.00th=[35914], 99.50th=[35914], 99.90th=[38536], 99.95th=[43779], 00:41:06.284 | 99.99th=[47449] 00:41:06.284 bw ( KiB/s): min= 1792, max= 3584, per=4.31%, avg=2037.05, stdev=381.62, samples=19 00:41:06.284 iops : min= 448, max= 896, avg=509.26, stdev=95.41, samples=19 00:41:06.284 lat (msec) : 2=2.65%, 4=0.82%, 10=0.61%, 20=0.84%, 50=95.07% 00:41:06.284 cpu : usr=98.99%, sys=0.73%, ctx=14, majf=0, minf=73 00:41:06.284 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=5094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32047: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10018msec) 00:41:06.284 slat (nsec): min=5542, max=71694, avg=15307.90, stdev=12592.87 00:41:06.284 clat (usec): min=15277, max=53206, avg=32207.70, stdev=4868.10 00:41:06.284 lat (usec): min=15284, max=53214, avg=32223.00, stdev=4869.64 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[21103], 5.00th=[22938], 10.00th=[25035], 20.00th=[29754], 00:41:06.284 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[35390], 95.00th=[41157], 00:41:06.284 | 99.00th=[48497], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:41:06.284 | 99.99th=[53216] 00:41:06.284 bw ( KiB/s): min= 1792, max= 2160, per=4.18%, avg=1976.95, stdev=104.62, samples=20 00:41:06.284 iops : min= 448, max= 540, avg=494.20, stdev=26.16, samples=20 00:41:06.284 lat (msec) : 20=0.28%, 50=99.54%, 100=0.18% 00:41:06.284 cpu : usr=99.10%, sys=0.61%, ctx=14, majf=0, minf=48 00:41:06.284 IO depths : 1=3.5%, 
2=7.1%, 4=16.5%, 8=63.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32048: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10012msec) 00:41:06.284 slat (nsec): min=5552, max=73192, avg=15761.01, stdev=10642.67 00:41:06.284 clat (usec): min=12547, max=35983, avg=32784.25, stdev=2185.19 00:41:06.284 lat (usec): min=12559, max=35993, avg=32800.01, stdev=2184.65 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[20579], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.284 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.284 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:41:06.284 | 99.99th=[35914] 00:41:06.284 bw ( KiB/s): min= 1920, max= 2048, per=4.10%, avg=1939.20, stdev=46.89, samples=20 00:41:06.284 iops : min= 480, max= 512, avg=484.80, stdev=11.72, samples=20 00:41:06.284 lat (msec) : 20=0.99%, 50=99.01% 00:41:06.284 cpu : usr=99.13%, sys=0.57%, ctx=15, majf=0, minf=27 00:41:06.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32049: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10004msec) 00:41:06.284 slat (nsec): min=5548, max=62597, avg=17050.16, stdev=9975.70 00:41:06.284 clat (usec): min=7139, max=59923, avg=32992.38, stdev=2491.35 00:41:06.284 lat (usec): min=7144, max=59942, avg=33009.43, stdev=2491.27 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.284 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.284 | 99.00th=[35914], 99.50th=[37487], 99.90th=[60031], 99.95th=[60031], 00:41:06.284 | 99.99th=[60031] 00:41:06.284 bw ( KiB/s): min= 1667, max= 2048, per=4.06%, avg=1920.16, stdev=73.32, samples=19 00:41:06.284 iops : min= 416, max= 512, avg=480.00, stdev=18.48, samples=19 00:41:06.284 lat (msec) : 10=0.29%, 20=0.33%, 50=99.05%, 100=0.33% 00:41:06.284 cpu : usr=98.80%, sys=0.79%, ctx=40, majf=0, minf=59 00:41:06.284 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32050: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=488, BW=1953KiB/s (2000kB/s)(19.1MiB/10005msec) 00:41:06.284 slat (nsec): min=5496, max=69421, 
avg=15944.51, stdev=10552.52 00:41:06.284 clat (usec): min=11547, max=78960, avg=32622.96, stdev=3698.99 00:41:06.284 lat (usec): min=11553, max=78978, avg=32638.91, stdev=3699.72 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[21627], 5.00th=[25035], 10.00th=[31327], 20.00th=[32113], 00:41:06.284 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:41:06.284 | 99.00th=[45351], 99.50th=[49021], 99.90th=[60031], 99.95th=[60031], 00:41:06.284 | 99.99th=[79168] 00:41:06.284 bw ( KiB/s): min= 1760, max= 2112, per=4.11%, avg=1943.58, stdev=80.42, samples=19 00:41:06.284 iops : min= 440, max= 528, avg=485.89, stdev=20.10, samples=19 00:41:06.284 lat (msec) : 20=0.33%, 50=99.18%, 100=0.49% 00:41:06.284 cpu : usr=98.52%, sys=1.04%, ctx=39, majf=0, minf=38 00:41:06.284 IO depths : 1=4.0%, 2=8.6%, 4=19.3%, 8=58.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32051: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:41:06.284 slat (nsec): min=5593, max=68476, avg=17970.83, stdev=11383.46 00:41:06.284 clat (usec): min=16264, max=46286, avg=32996.06, stdev=1535.86 00:41:06.284 lat (usec): min=16279, max=46303, avg=33014.03, stdev=1535.17 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:41:06.284 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[33162], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.284 | 99.00th=[35914], 99.50th=[35914], 99.90th=[46400], 99.95th=[46400], 00:41:06.284 | 99.99th=[46400] 00:41:06.284 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1920.16, stdev=59.99, samples=19 00:41:06.284 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:41:06.284 lat (msec) : 20=0.33%, 50=99.67% 00:41:06.284 cpu : usr=99.03%, sys=0.68%, ctx=13, majf=0, minf=41 00:41:06.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32053: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10018msec) 00:41:06.284 slat (nsec): min=5551, max=68295, avg=12925.03, stdev=9611.18 00:41:06.284 clat (usec): min=14020, max=55156, avg=32704.33, stdev=4216.82 00:41:06.284 lat (usec): min=14038, max=55190, avg=32717.26, stdev=4217.69 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[20317], 5.00th=[23987], 10.00th=[27919], 20.00th=[32113], 00:41:06.284 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[33162], 00:41:06.284 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[40633], 00:41:06.284 | 99.00th=[45876], 99.50th=[51643], 99.90th=[54789], 99.95th=[55313], 00:41:06.284 | 99.99th=[55313] 00:41:06.284 bw ( 
KiB/s): min= 1792, max= 2224, per=4.12%, avg=1948.00, stdev=95.28, samples=20 00:41:06.284 iops : min= 448, max= 556, avg=487.00, stdev=23.82, samples=20 00:41:06.284 lat (msec) : 20=0.65%, 50=98.57%, 100=0.78% 00:41:06.284 cpu : usr=98.35%, sys=1.10%, ctx=276, majf=0, minf=76 00:41:06.284 IO depths : 1=3.8%, 2=8.5%, 4=20.1%, 8=58.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename0: (groupid=0, jobs=1): err= 0: pid=32054: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10003msec) 00:41:06.284 slat (nsec): min=5475, max=71343, avg=16541.65, stdev=12683.58 00:41:06.284 clat (usec): min=7093, max=59165, avg=32440.26, stdev=4089.94 00:41:06.284 lat (usec): min=7099, max=59192, avg=32456.80, stdev=4090.65 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[21103], 5.00th=[24773], 10.00th=[27395], 20.00th=[32113], 00:41:06.284 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:41:06.284 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[38011], 00:41:06.284 | 99.00th=[42206], 99.50th=[47449], 99.90th=[58983], 99.95th=[58983], 00:41:06.284 | 99.99th=[58983] 00:41:06.284 bw ( KiB/s): min= 1792, max= 2160, per=4.13%, avg=1953.68, stdev=75.22, samples=19 00:41:06.284 iops : min= 448, max= 540, avg=488.42, stdev=18.80, samples=19 00:41:06.284 lat (msec) : 10=0.18%, 20=0.57%, 50=98.92%, 100=0.33% 00:41:06.284 cpu : usr=98.86%, sys=0.85%, ctx=12, majf=0, minf=70 00:41:06.284 IO depths : 1=3.1%, 2=6.4%, 4=14.3%, 8=65.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=91.5%, 8=4.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename1: (groupid=0, jobs=1): err= 0: pid=32055: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:41:06.284 slat (nsec): min=5717, max=74991, avg=23450.28, stdev=13387.62 00:41:06.284 clat (usec): min=16219, max=46665, avg=32914.34, stdev=1551.29 00:41:06.284 lat (usec): min=16237, max=46681, avg=32937.79, stdev=1551.10 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:41:06.284 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.284 | 99.00th=[35914], 99.50th=[35914], 99.90th=[46400], 99.95th=[46400], 00:41:06.284 | 99.99th=[46924] 00:41:06.284 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1920.00, stdev=60.34, samples=19 00:41:06.284 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:41:06.284 lat (msec) : 20=0.33%, 50=99.67% 00:41:06.284 cpu : usr=98.84%, sys=0.80%, ctx=61, majf=0, minf=37 00:41:06.284 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4832,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename1: (groupid=0, jobs=1): err= 0: pid=32056: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10026msec) 00:41:06.284 slat (nsec): min=5553, max=75425, avg=24937.26, stdev=14642.90 00:41:06.284 clat (usec): min=13032, max=50407, avg=32691.65, stdev=2007.41 00:41:06.284 lat (usec): min=13042, max=50416, avg=32716.59, stdev=2008.66 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[22414], 5.00th=[31589], 10.00th=[32113], 20.00th=[32113], 00:41:06.284 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:41:06.284 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.284 | 99.00th=[35390], 99.50th=[35914], 99.90th=[50070], 99.95th=[50594], 00:41:06.284 | 99.99th=[50594] 00:41:06.284 bw ( KiB/s): min= 1792, max= 2096, per=4.10%, avg=1941.60, stdev=67.74, samples=20 00:41:06.284 iops : min= 448, max= 524, avg=485.40, stdev=16.93, samples=20 00:41:06.284 lat (msec) : 20=0.33%, 50=99.51%, 100=0.16% 00:41:06.284 cpu : usr=98.83%, sys=0.86%, ctx=13, majf=0, minf=68 00:41:06.284 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:06.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.284 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.284 filename1: (groupid=0, jobs=1): err= 0: pid=32057: Wed Nov 6 10:33:08 2024 00:41:06.284 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:41:06.284 slat (nsec): min=5607, max=75600, avg=22350.65, stdev=14442.24 00:41:06.284 clat (usec): min=20028, max=42428, avg=32949.93, stdev=1125.95 00:41:06.284 lat (usec): min=20037, max=42451, avg=32972.28, stdev=1124.78 00:41:06.284 clat percentiles (usec): 00:41:06.284 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.285 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[33162], 00:41:06.285 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.285 | 99.00th=[35914], 99.50th=[35914], 99.90th=[37487], 99.95th=[38011], 00:41:06.285 | 99.99th=[42206] 00:41:06.285 bw ( KiB/s): min= 1795, max= 2048, per=4.07%, avg=1926.89, stdev=51.36, samples=19 00:41:06.285 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:41:06.285 lat (msec) : 50=100.00% 00:41:06.285 cpu : usr=99.09%, sys=0.61%, ctx=13, majf=0, minf=46 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename1: (groupid=0, jobs=1): err= 0: pid=32058: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:41:06.285 slat (nsec): min=5559, max=72520, avg=14956.16, stdev=12113.76 00:41:06.285 clat (usec): min=24396, max=39452, avg=33025.16, stdev=1113.93 00:41:06.285 lat (usec): min=24404, max=39478, avg=33040.12, stdev=1112.87 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 
20.00th=[32375], 00:41:06.285 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.285 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.285 | 99.00th=[35914], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:41:06.285 | 99.99th=[39584] 00:41:06.285 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1926.74, stdev=51.80, samples=19 00:41:06.285 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:41:06.285 lat (msec) : 50=100.00% 00:41:06.285 cpu : usr=99.02%, sys=0.68%, ctx=13, majf=0, minf=37 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename1: (groupid=0, jobs=1): err= 0: pid=32059: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10004msec) 00:41:06.285 slat (nsec): min=5548, max=63180, avg=8709.48, stdev=5971.70 00:41:06.285 clat (usec): min=10406, max=41460, avg=32938.84, stdev=1799.38 00:41:06.285 lat (usec): min=10415, max=41466, avg=32947.55, stdev=1798.85 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[24511], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.285 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.285 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.285 | 99.00th=[35390], 99.50th=[35914], 99.90th=[41157], 99.95th=[41157], 00:41:06.285 | 99.99th=[41681] 00:41:06.285 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1933.47, stdev=58.73, samples=19 00:41:06.285 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:41:06.285 lat (msec) : 20=0.62%, 50=99.38% 00:41:06.285 cpu : usr=99.03%, sys=0.68%, ctx=14, majf=0, minf=68 00:41:06.285 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename1: (groupid=0, jobs=1): err= 0: pid=32060: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10016msec) 00:41:06.285 slat (nsec): min=5553, max=68296, avg=15087.86, stdev=12144.52 00:41:06.285 clat (usec): min=16305, max=49116, avg=32682.07, stdev=2906.18 00:41:06.285 lat (usec): min=16311, max=49124, avg=32697.16, stdev=2906.69 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[21103], 5.00th=[26870], 10.00th=[31851], 20.00th=[32375], 00:41:06.285 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[33162], 00:41:06.285 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.285 | 99.00th=[42730], 99.50th=[44827], 99.90th=[49021], 99.95th=[49021], 00:41:06.285 | 99.99th=[49021] 00:41:06.285 bw ( KiB/s): min= 1792, max= 2128, per=4.10%, avg=1941.89, stdev=80.92, samples=19 00:41:06.285 iops : min= 448, max= 532, avg=485.47, stdev=20.23, samples=19 00:41:06.285 lat (msec) : 20=0.33%, 50=99.67% 00:41:06.285 cpu : usr=98.62%, sys=0.93%, ctx=61, majf=0, minf=68 00:41:06.285 IO depths : 1=5.3%, 
2=11.0%, 4=23.4%, 8=53.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename1: (groupid=0, jobs=1): err= 0: pid=32061: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=483, BW=1932KiB/s (1978kB/s)(18.9MiB/10004msec) 00:41:06.285 slat (nsec): min=5466, max=75388, avg=22517.34, stdev=13792.29 00:41:06.285 clat (usec): min=4082, max=71120, avg=32919.70, stdev=2727.49 00:41:06.285 lat (usec): min=4088, max=71139, avg=32942.22, stdev=2727.36 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:41:06.285 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:41:06.285 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.285 | 99.00th=[35914], 99.50th=[40633], 99.90th=[59507], 99.95th=[59507], 00:41:06.285 | 99.99th=[70779] 00:41:06.285 bw ( KiB/s): min= 1667, max= 2048, per=4.06%, avg=1920.16, stdev=84.83, samples=19 00:41:06.285 iops : min= 416, max= 512, avg=480.00, stdev=21.33, samples=19 00:41:06.285 lat (msec) : 10=0.33%, 20=0.33%, 50=99.01%, 100=0.33% 00:41:06.285 cpu : usr=98.70%, sys=0.86%, ctx=143, majf=0, minf=55 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename1: (groupid=0, jobs=1): err= 0: pid=32063: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=586, BW=2346KiB/s (2403kB/s)(22.9MiB/10010msec) 00:41:06.285 slat (nsec): min=5545, max=59317, avg=7789.51, stdev=3891.27 00:41:06.285 clat (usec): min=6057, max=35032, avg=27208.13, stdev=5560.53 00:41:06.285 lat (usec): min=6069, max=35038, avg=27215.92, stdev=5561.04 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[16909], 5.00th=[19530], 10.00th=[20317], 20.00th=[21627], 00:41:06.285 | 30.00th=[22676], 40.00th=[24249], 50.00th=[26608], 60.00th=[32375], 00:41:06.285 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:41:06.285 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:41:06.285 | 99.99th=[34866] 00:41:06.285 bw ( KiB/s): min= 1920, max= 2688, per=4.95%, avg=2342.40, stdev=220.14, samples=20 00:41:06.285 iops : min= 480, max= 672, avg=585.60, stdev=55.04, samples=20 00:41:06.285 lat (msec) : 10=0.54%, 20=6.59%, 50=92.86% 00:41:06.285 cpu : usr=98.64%, sys=0.91%, ctx=74, majf=0, minf=61 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename2: (groupid=0, jobs=1): err= 0: pid=32064: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=533, BW=2135KiB/s (2187kB/s)(20.9MiB/10010msec) 00:41:06.285 slat (nsec): min=5557, max=42030, 
avg=8302.76, stdev=4524.40 00:41:06.285 clat (usec): min=6012, max=35035, avg=29894.42, stdev=5101.06 00:41:06.285 lat (usec): min=6020, max=35042, avg=29902.73, stdev=5101.64 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[15270], 5.00th=[20579], 10.00th=[21627], 20.00th=[23987], 00:41:06.285 | 30.00th=[31589], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:41:06.285 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:41:06.285 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:41:06.285 | 99.99th=[34866] 00:41:06.285 bw ( KiB/s): min= 1920, max= 2688, per=4.51%, avg=2131.20, stdev=200.35, samples=20 00:41:06.285 iops : min= 480, max= 672, avg=532.80, stdev=50.09, samples=20 00:41:06.285 lat (msec) : 10=0.60%, 20=3.09%, 50=96.31% 00:41:06.285 cpu : usr=98.78%, sys=0.80%, ctx=62, majf=0, minf=86 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename2: (groupid=0, jobs=1): err= 0: pid=32065: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10010msec) 00:41:06.285 slat (nsec): min=5547, max=62640, avg=7387.05, stdev=4331.53 00:41:06.285 clat (usec): min=19716, max=43964, avg=33078.32, stdev=1130.56 00:41:06.285 lat (usec): min=19727, max=43993, avg=33085.71, stdev=1130.27 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:41:06.285 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.285 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.285 | 99.00th=[35914], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:41:06.285 | 99.99th=[43779] 00:41:06.285 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1926.74, stdev=51.80, samples=19 00:41:06.285 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:41:06.285 lat (msec) : 20=0.04%, 50=99.96% 00:41:06.285 cpu : usr=98.96%, sys=0.72%, ctx=55, majf=0, minf=48 00:41:06.285 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.285 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.285 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.285 filename2: (groupid=0, jobs=1): err= 0: pid=32066: Wed Nov 6 10:33:08 2024 00:41:06.285 read: IOPS=506, BW=2028KiB/s (2077kB/s)(19.9MiB/10024msec) 00:41:06.285 slat (nsec): min=5545, max=71651, avg=13426.94, stdev=11003.42 00:41:06.285 clat (usec): min=11255, max=54800, avg=31445.92, stdev=5378.61 00:41:06.285 lat (usec): min=11272, max=54858, avg=31459.35, stdev=5380.57 00:41:06.285 clat percentiles (usec): 00:41:06.285 | 1.00th=[15401], 5.00th=[21365], 10.00th=[22938], 20.00th=[27919], 00:41:06.286 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:41:06.286 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:41:06.286 | 99.00th=[49021], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:41:06.286 | 99.99th=[54789] 00:41:06.286 bw ( 
KiB/s): min= 1792, max= 2224, per=4.28%, avg=2026.60, stdev=112.95, samples=20 00:41:06.286 iops : min= 448, max= 556, avg=506.65, stdev=28.24, samples=20 00:41:06.286 lat (msec) : 20=2.24%, 50=96.85%, 100=0.91% 00:41:06.286 cpu : usr=98.92%, sys=0.78%, ctx=12, majf=0, minf=63 00:41:06.286 IO depths : 1=3.7%, 2=7.7%, 4=17.7%, 8=61.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: total=5082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 filename2: (groupid=0, jobs=1): err= 0: pid=32067: Wed Nov 6 10:33:08 2024 00:41:06.286 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10003msec) 00:41:06.286 slat (nsec): min=5553, max=75143, avg=19508.29, stdev=12571.23 00:41:06.286 clat (usec): min=6741, max=59111, avg=32945.94, stdev=2484.79 00:41:06.286 lat (usec): min=6747, max=59132, avg=32965.44, stdev=2484.88 00:41:06.286 clat percentiles (usec): 00:41:06.286 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.286 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.286 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.286 | 99.00th=[35914], 99.50th=[35914], 99.90th=[58983], 99.95th=[58983], 00:41:06.286 | 99.99th=[58983] 00:41:06.286 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1920.00, stdev=42.67, samples=19 00:41:06.286 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:41:06.286 lat (msec) : 10=0.33%, 20=0.33%, 50=99.01%, 100=0.33% 00:41:06.286 cpu : usr=98.63%, sys=0.90%, ctx=61, majf=0, minf=48 00:41:06.286 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 filename2: (groupid=0, jobs=1): err= 0: pid=32068: Wed Nov 6 10:33:08 2024 00:41:06.286 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec) 00:41:06.286 slat (nsec): min=5545, max=73724, avg=21249.25, stdev=13739.50 00:41:06.286 clat (usec): min=16248, max=60588, avg=32596.97, stdev=3205.51 00:41:06.286 lat (usec): min=16257, max=60605, avg=32618.22, stdev=3206.38 00:41:06.286 clat percentiles (usec): 00:41:06.286 | 1.00th=[21365], 5.00th=[27395], 10.00th=[31851], 20.00th=[32113], 00:41:06.286 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:41:06.286 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.286 | 99.00th=[42730], 99.50th=[47449], 99.90th=[60556], 99.95th=[60556], 00:41:06.286 | 99.99th=[60556] 00:41:06.286 bw ( KiB/s): min= 1664, max= 2208, per=4.11%, avg=1946.95, stdev=103.97, samples=19 00:41:06.286 iops : min= 416, max= 552, avg=486.74, stdev=25.99, samples=19 00:41:06.286 lat (msec) : 20=0.33%, 50=99.18%, 100=0.49% 00:41:06.286 cpu : usr=98.62%, sys=0.91%, ctx=118, majf=0, minf=69 00:41:06.286 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: 
total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 filename2: (groupid=0, jobs=1): err= 0: pid=32069: Wed Nov 6 10:33:08 2024 00:41:06.286 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10017msec) 00:41:06.286 slat (nsec): min=5556, max=54771, avg=14152.46, stdev=8265.46 00:41:06.286 clat (usec): min=17136, max=51035, avg=32998.38, stdev=1608.82 00:41:06.286 lat (usec): min=17147, max=51053, avg=33012.53, stdev=1608.49 00:41:06.286 clat percentiles (usec): 00:41:06.286 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:41:06.286 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.286 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.286 | 99.00th=[35914], 99.50th=[35914], 99.90th=[51119], 99.95th=[51119], 00:41:06.286 | 99.99th=[51119] 00:41:06.286 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1922.53, stdev=74.34, samples=19 00:41:06.286 iops : min= 448, max= 512, avg=480.63, stdev=18.58, samples=19 00:41:06.286 lat (msec) : 20=0.37%, 50=99.42%, 100=0.21% 00:41:06.286 cpu : usr=98.46%, sys=1.10%, ctx=45, majf=0, minf=37 00:41:06.286 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 filename2: (groupid=0, jobs=1): err= 0: pid=32070: Wed Nov 6 10:33:08 2024 00:41:06.286 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:41:06.286 slat (nsec): min=5709, max=65406, avg=20253.43, stdev=10178.41 00:41:06.286 clat (usec): min=16224, max=46273, avg=32960.10, stdev=1539.08 00:41:06.286 lat (usec): min=16242, max=46290, avg=32980.35, stdev=1538.53 00:41:06.286 clat percentiles (usec): 00:41:06.286 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:41:06.286 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[33162], 00:41:06.286 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:06.286 | 99.00th=[35914], 99.50th=[35914], 99.90th=[46400], 99.95th=[46400], 00:41:06.286 | 99.99th=[46400] 00:41:06.286 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1920.16, stdev=59.99, samples=19 00:41:06.286 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:41:06.286 lat (msec) : 20=0.33%, 50=99.67% 00:41:06.286 cpu : usr=98.94%, sys=0.77%, ctx=10, majf=0, minf=49 00:41:06.286 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 filename2: (groupid=0, jobs=1): err= 0: pid=32071: Wed Nov 6 10:33:08 2024 00:41:06.286 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:41:06.286 slat (nsec): min=5615, max=71593, avg=11705.97, stdev=8683.80 00:41:06.286 clat (usec): min=12591, max=35984, avg=32925.03, stdev=1680.12 00:41:06.286 lat (usec): min=12599, max=35991, avg=32936.74, stdev=1679.17 00:41:06.286 clat percentiles (usec): 00:41:06.286 | 1.00th=[31065], 5.00th=[31851], 
10.00th=[32113], 20.00th=[32375], 00:41:06.286 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:41:06.286 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:41:06.286 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:41:06.286 | 99.99th=[35914] 00:41:06.286 bw ( KiB/s): min= 1920, max= 2052, per=4.09%, avg=1933.00, stdev=40.02, samples=20 00:41:06.286 iops : min= 480, max= 513, avg=483.25, stdev=10.00, samples=20 00:41:06.286 lat (msec) : 20=0.33%, 50=99.67% 00:41:06.286 cpu : usr=98.82%, sys=0.80%, ctx=118, majf=0, minf=49 00:41:06.286 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.286 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.286 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:06.286 00:41:06.286 Run status group 0 (all jobs): 00:41:06.286 READ: bw=46.2MiB/s (48.4MB/s), 1931KiB/s-2346KiB/s (1977kB/s-2403kB/s), io=463MiB (486MB), run=10001-10026msec 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:06.286 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 bdev_null0 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 [2024-11-06 10:33:08.493016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 bdev_null1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:06.287 { 00:41:06.287 "params": { 00:41:06.287 "name": "Nvme$subsystem", 00:41:06.287 "trtype": "$TEST_TRANSPORT", 00:41:06.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.287 "adrfam": "ipv4", 00:41:06.287 "trsvcid": "$NVMF_PORT", 00:41:06.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.287 "hdgst": ${hdgst:-false}, 00:41:06.287 "ddgst": ${ddgst:-false} 00:41:06.287 }, 00:41:06.287 "method": "bdev_nvme_attach_controller" 00:41:06.287 } 00:41:06.287 EOF 00:41:06.287 )") 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:06.287 { 00:41:06.287 "params": { 00:41:06.287 "name": "Nvme$subsystem", 00:41:06.287 "trtype": "$TEST_TRANSPORT", 00:41:06.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.287 "adrfam": "ipv4", 00:41:06.287 "trsvcid": "$NVMF_PORT", 00:41:06.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.287 "hdgst": ${hdgst:-false}, 00:41:06.287 "ddgst": ${ddgst:-false} 00:41:06.287 }, 00:41:06.287 "method": "bdev_nvme_attach_controller" 00:41:06.287 } 00:41:06.287 EOF 00:41:06.287 )") 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:06.287 "params": { 00:41:06.287 "name": "Nvme0", 00:41:06.287 "trtype": "tcp", 00:41:06.287 "traddr": "10.0.0.2", 00:41:06.287 "adrfam": "ipv4", 00:41:06.287 "trsvcid": "4420", 00:41:06.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.287 "hdgst": false, 00:41:06.287 "ddgst": false 00:41:06.287 }, 00:41:06.287 "method": "bdev_nvme_attach_controller" 00:41:06.287 },{ 00:41:06.287 "params": { 00:41:06.287 "name": "Nvme1", 00:41:06.287 "trtype": "tcp", 00:41:06.287 "traddr": "10.0.0.2", 00:41:06.287 "adrfam": "ipv4", 00:41:06.287 "trsvcid": "4420", 00:41:06.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:06.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:06.287 "hdgst": false, 00:41:06.287 "ddgst": false 00:41:06.287 }, 00:41:06.287 "method": "bdev_nvme_attach_controller" 00:41:06.287 }' 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:06.287 10:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.287 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:06.287 ... 00:41:06.287 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:06.287 ... 
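The trace above shows the pattern dif.sh uses to drive fio against the target: the per-controller attach parameters are rendered as JSON and handed to fio's spdk_bdev ioengine through --spdk_json_conf, while LD_PRELOAD pulls in the SPDK fio plugin built under spdk/build/fio/spdk_bdev; both the JSON and the generated job file reach fio over /dev/fd. A minimal standalone sketch of the same invocation pattern follows (the file names are placeholders, not paths from this run):

# Hedged sketch of the fio + spdk_bdev plugin invocation traced above.
# nvme_attach.json stands in for the bdev_nvme_attach_controller entries the
# printf in the trace emits; dif.job stands in for the generated job file.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf nvme_attach.json dif.job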
00:41:06.287 fio-3.35 00:41:06.287 Starting 4 threads 00:41:11.571 00:41:11.571 filename0: (groupid=0, jobs=1): err= 0: pid=34919: Wed Nov 6 10:33:14 2024 00:41:11.571 read: IOPS=2199, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5003msec) 00:41:11.571 slat (nsec): min=5390, max=57019, avg=6093.14, stdev=1790.93 00:41:11.571 clat (usec): min=1572, max=6382, avg=3618.43, stdev=458.07 00:41:11.571 lat (usec): min=1578, max=6413, avg=3624.52, stdev=458.09 00:41:11.571 clat percentiles (usec): 00:41:11.571 | 1.00th=[ 2507], 5.00th=[ 2868], 10.00th=[ 2999], 20.00th=[ 3228], 00:41:11.571 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:41:11.571 | 70.00th=[ 3818], 80.00th=[ 3818], 90.00th=[ 3884], 95.00th=[ 3949], 00:41:11.571 | 99.00th=[ 5407], 99.50th=[ 5735], 99.90th=[ 5932], 99.95th=[ 6063], 00:41:11.571 | 99.99th=[ 6325] 00:41:11.571 bw ( KiB/s): min=16640, max=19808, per=26.37%, avg=17664.00, stdev=1268.15, samples=9 00:41:11.571 iops : min= 2080, max= 2476, avg=2208.00, stdev=158.52, samples=9 00:41:11.571 lat (msec) : 2=0.26%, 4=95.18%, 10=4.56% 00:41:11.571 cpu : usr=97.16%, sys=2.58%, ctx=6, majf=0, minf=54 00:41:11.571 IO depths : 1=0.1%, 2=3.9%, 4=68.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 issued rwts: total=11006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:11.571 filename0: (groupid=0, jobs=1): err= 0: pid=34920: Wed Nov 6 10:33:14 2024 00:41:11.571 read: IOPS=2067, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5004msec) 00:41:11.571 slat (nsec): min=5397, max=41163, avg=8289.05, stdev=3423.19 00:41:11.571 clat (usec): min=2089, max=6015, avg=3851.06, stdev=311.42 00:41:11.571 lat (usec): min=2107, max=6021, avg=3859.35, stdev=311.51 00:41:11.571 clat percentiles (usec): 00:41:11.571 | 1.00th=[ 3195], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3752], 00:41:11.571 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:41:11.571 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4555], 00:41:11.571 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5800], 99.95th=[ 5997], 00:41:11.571 | 99.99th=[ 5997] 00:41:11.571 bw ( KiB/s): min=15840, max=17072, per=24.70%, avg=16544.00, stdev=469.64, samples=10 00:41:11.571 iops : min= 1980, max= 2134, avg=2068.00, stdev=58.70, samples=10 00:41:11.571 lat (msec) : 4=86.41%, 10=13.59% 00:41:11.571 cpu : usr=96.48%, sys=3.20%, ctx=21, majf=0, minf=44 00:41:11.571 IO depths : 1=0.1%, 2=0.1%, 4=64.5%, 8=35.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 issued rwts: total=10345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:11.571 filename1: (groupid=0, jobs=1): err= 0: pid=34921: Wed Nov 6 10:33:14 2024 00:41:11.571 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5004msec) 00:41:11.571 slat (nsec): min=7883, max=56679, avg=9102.75, stdev=3205.85 00:41:11.571 clat (usec): min=1668, max=6313, avg=3828.51, stdev=303.61 00:41:11.571 lat (usec): min=1676, max=6321, avg=3837.61, stdev=303.39 00:41:11.571 clat percentiles (usec): 00:41:11.571 | 1.00th=[ 3032], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3752], 00:41:11.571 | 30.00th=[ 3785], 40.00th=[ 
3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:41:11.571 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4424], 00:41:11.571 | 99.00th=[ 4817], 99.50th=[ 5407], 99.90th=[ 5997], 99.95th=[ 6259], 00:41:11.571 | 99.99th=[ 6325] 00:41:11.571 bw ( KiB/s): min=15632, max=17120, per=24.81%, avg=16622.40, stdev=511.08, samples=10 00:41:11.571 iops : min= 1954, max= 2140, avg=2077.80, stdev=63.88, samples=10 00:41:11.571 lat (msec) : 2=0.06%, 4=87.22%, 10=12.72% 00:41:11.571 cpu : usr=96.92%, sys=2.78%, ctx=40, majf=0, minf=27 00:41:11.571 IO depths : 1=0.1%, 2=0.2%, 4=67.3%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 issued rwts: total=10395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:11.571 filename1: (groupid=0, jobs=1): err= 0: pid=34922: Wed Nov 6 10:33:14 2024 00:41:11.571 read: IOPS=2029, BW=15.9MiB/s (16.6MB/s)(79.3MiB/5002msec) 00:41:11.571 slat (nsec): min=5399, max=40826, avg=7909.75, stdev=2988.02 00:41:11.571 clat (usec): min=1832, max=8523, avg=3919.87, stdev=449.21 00:41:11.571 lat (usec): min=1838, max=8547, avg=3927.78, stdev=449.05 00:41:11.571 clat percentiles (usec): 00:41:11.571 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3654], 20.00th=[ 3752], 00:41:11.571 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:41:11.571 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4359], 95.00th=[ 4686], 00:41:11.571 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6259], 99.95th=[ 6259], 00:41:11.571 | 99.99th=[ 8455] 00:41:11.571 bw ( KiB/s): min=15664, max=16768, per=24.18%, avg=16197.33, stdev=440.00, samples=9 00:41:11.571 iops : min= 1958, max= 2096, avg=2024.67, stdev=55.00, samples=9 00:41:11.571 lat (msec) : 2=0.05%, 4=81.76%, 10=18.19% 00:41:11.571 cpu : usr=96.92%, sys=2.84%, ctx=6, majf=0, minf=46 00:41:11.571 IO depths : 1=0.1%, 2=0.1%, 4=72.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.571 issued rwts: total=10153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:11.571 00:41:11.571 Run status group 0 (all jobs): 00:41:11.571 READ: bw=65.4MiB/s (68.6MB/s), 15.9MiB/s-17.2MiB/s (16.6MB/s-18.0MB/s), io=327MiB (343MB), run=5002-5004msec 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.571 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.571 00:41:11.571 real 0m24.535s 00:41:11.572 user 5m14.871s 00:41:11.572 sys 0m4.452s 00:41:11.572 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:11.572 10:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 ************************************ 00:41:11.572 END TEST fio_dif_rand_params 00:41:11.572 ************************************ 00:41:11.572 10:33:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:11.572 10:33:14 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:11.572 10:33:14 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:11.572 10:33:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 ************************************ 00:41:11.572 START TEST fio_dif_digest 00:41:11.572 ************************************ 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
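fio_dif_digest rebuilds the target with a single null bdev carrying DIF type 3 metadata and, further down, attaches to it with header and data digest enabled (hdgst/ddgst true). The trace that follows performs the setup through the rpc_cmd wrapper; an equivalent sequence with the standalone rpc.py client would look roughly like this (the rpc.py path is assumed; the arguments mirror the rpc_cmd calls traced below):

# Assumed client path; the test itself uses its rpc_cmd wrapper instead.
RPC=/path/to/spdk/scripts/rpc.py
# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Export it through an NVMe/TCP subsystem listening on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420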
00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 bdev_null0 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.572 10:33:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:11.572 [2024-11-06 10:33:15.020747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:11.572 { 00:41:11.572 "params": { 00:41:11.572 "name": "Nvme$subsystem", 00:41:11.572 "trtype": "$TEST_TRANSPORT", 00:41:11.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:11.572 "adrfam": "ipv4", 00:41:11.572 "trsvcid": "$NVMF_PORT", 00:41:11.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:11.572 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:41:11.572 "hdgst": ${hdgst:-false}, 00:41:11.572 "ddgst": ${ddgst:-false} 00:41:11.572 }, 00:41:11.572 "method": "bdev_nvme_attach_controller" 00:41:11.572 } 00:41:11.572 EOF 00:41:11.572 )") 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:11.572 "params": { 00:41:11.572 "name": "Nvme0", 00:41:11.572 "trtype": "tcp", 00:41:11.572 "traddr": "10.0.0.2", 00:41:11.572 "adrfam": "ipv4", 00:41:11.572 "trsvcid": "4420", 00:41:11.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:11.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:11.572 "hdgst": true, 00:41:11.572 "ddgst": true 00:41:11.572 }, 00:41:11.572 "method": "bdev_nvme_attach_controller" 00:41:11.572 }' 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:11.572 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:11.876 10:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:12.139 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:12.139 ... 
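The job file itself travels over /dev/fd/61 and is never echoed, only summarized in the banner above (randread, 128 KiB blocks, spdk_bdev engine, iodepth 3). Combined with the parameters set at the start of this test (numjobs=3, runtime=10), a roughly equivalent hand-written job file could be produced as below; treat it as an approximation, since the exact globals gen_fio_conf emits and the bdev name it targets (Nvme0n1 is assumed from the controller name Nvme0) are not shown in this log:

# Approximate stand-in for the generated job file (not byte-for-byte).
cat > dif_digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF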
00:41:12.139 fio-3.35 00:41:12.139 Starting 3 threads 00:41:24.374 00:41:24.374 filename0: (groupid=0, jobs=1): err= 0: pid=36324: Wed Nov 6 10:33:26 2024 00:41:24.374 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(353MiB/10046msec) 00:41:24.374 slat (nsec): min=5766, max=48263, avg=6924.49, stdev=1513.78 00:41:24.374 clat (usec): min=5426, max=53346, avg=10636.75, stdev=2400.27 00:41:24.374 lat (usec): min=5436, max=53353, avg=10643.67, stdev=2400.25 00:41:24.374 clat percentiles (usec): 00:41:24.374 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[ 8586], 00:41:24.374 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[11731], 00:41:24.374 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:41:24.374 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15926], 99.95th=[50594], 00:41:24.374 | 99.99th=[53216] 00:41:24.374 bw ( KiB/s): min=33536, max=39936, per=44.18%, avg=36147.20, stdev=1731.50, samples=20 00:41:24.374 iops : min= 262, max= 312, avg=282.40, stdev=13.53, samples=20 00:41:24.374 lat (msec) : 10=44.11%, 20=55.82%, 100=0.07% 00:41:24.374 cpu : usr=92.86%, sys=6.01%, ctx=807, majf=0, minf=198 00:41:24.374 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 issued rwts: total=2827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:24.374 filename0: (groupid=0, jobs=1): err= 0: pid=36325: Wed Nov 6 10:33:26 2024 00:41:24.374 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10045msec) 00:41:24.374 slat (nsec): min=5743, max=30478, avg=6487.32, stdev=893.83 00:41:24.374 clat (usec): min=8502, max=94556, avg=14962.51, stdev=9382.92 00:41:24.374 lat (usec): min=8508, max=94563, avg=14969.00, stdev=9382.99 00:41:24.374 clat percentiles (usec): 00:41:24.374 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10683], 00:41:24.374 | 30.00th=[11731], 40.00th=[12911], 50.00th=[13435], 60.00th=[13960], 00:41:24.374 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15795], 95.00th=[47973], 00:41:24.374 | 99.00th=[54789], 99.50th=[55313], 99.90th=[91751], 99.95th=[93848], 00:41:24.374 | 99.99th=[94897] 00:41:24.374 bw ( KiB/s): min=17152, max=29184, per=31.42%, avg=25702.40, stdev=2977.81, samples=20 00:41:24.374 iops : min= 134, max= 228, avg=200.80, stdev=23.26, samples=20 00:41:24.374 lat (msec) : 10=9.55%, 20=85.42%, 50=0.15%, 100=4.88% 00:41:24.374 cpu : usr=95.59%, sys=4.19%, ctx=25, majf=0, minf=138 00:41:24.374 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:24.374 filename0: (groupid=0, jobs=1): err= 0: pid=36326: Wed Nov 6 10:33:26 2024 00:41:24.374 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(198MiB/10046msec) 00:41:24.374 slat (nsec): min=5788, max=31358, avg=6539.95, stdev=1074.14 00:41:24.374 clat (usec): min=7939, max=96671, avg=18990.67, stdev=14643.30 00:41:24.374 lat (usec): min=7945, max=96678, avg=18997.21, stdev=14643.30 00:41:24.374 clat percentiles (usec): 00:41:24.374 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10945], 20.00th=[12649], 
00:41:24.374 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:41:24.374 | 70.00th=[14877], 80.00th=[15533], 90.00th=[53216], 95.00th=[54789], 00:41:24.374 | 99.00th=[56361], 99.50th=[93848], 99.90th=[95945], 99.95th=[96994], 00:41:24.374 | 99.99th=[96994] 00:41:24.374 bw ( KiB/s): min=14592, max=25344, per=24.73%, avg=20236.80, stdev=2865.75, samples=20 00:41:24.374 iops : min= 114, max= 198, avg=158.10, stdev=22.39, samples=20 00:41:24.374 lat (msec) : 10=3.54%, 20=83.71%, 50=0.06%, 100=12.69% 00:41:24.374 cpu : usr=95.28%, sys=4.51%, ctx=21, majf=0, minf=46 00:41:24.374 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.374 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:24.374 00:41:24.374 Run status group 0 (all jobs): 00:41:24.374 READ: bw=79.9MiB/s (83.8MB/s), 19.7MiB/s-35.2MiB/s (20.7MB/s-36.9MB/s), io=803MiB (842MB), run=10045-10046msec 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.374 00:41:24.374 real 0m11.286s 00:41:24.374 user 0m40.151s 00:41:24.374 sys 0m1.805s 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:24.374 10:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.375 ************************************ 00:41:24.375 END TEST fio_dif_digest 00:41:24.375 ************************************ 00:41:24.375 10:33:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:24.375 10:33:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:24.375 rmmod nvme_tcp 00:41:24.375 rmmod nvme_fabrics 00:41:24.375 rmmod nvme_keyring 00:41:24.375 10:33:26 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 25574 ']' 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 25574 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 25574 ']' 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 25574 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 25574 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 25574' 00:41:24.375 killing process with pid 25574 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@971 -- # kill 25574 00:41:24.375 10:33:26 nvmf_dif -- common/autotest_common.sh@976 -- # wait 25574 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:24.375 10:33:26 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:26.916 Waiting for block devices as requested 00:41:26.916 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:27.213 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:27.213 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:27.213 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:27.473 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:27.473 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:27.473 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:27.733 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:27.733 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:27.992 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:27.992 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:27.992 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:27.992 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:28.252 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:28.252 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:28.252 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:28.252 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:28.821 10:33:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:28.821 10:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:28.821 10:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.730 10:33:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:30.730 00:41:30.730 real 
1m20.423s 00:41:30.730 user 7m56.876s 00:41:30.730 sys 0m23.156s 00:41:30.730 10:33:34 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:30.730 10:33:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:30.730 ************************************ 00:41:30.730 END TEST nvmf_dif 00:41:30.730 ************************************ 00:41:30.731 10:33:34 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:30.731 10:33:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:30.731 10:33:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:30.731 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:41:30.731 ************************************ 00:41:30.731 START TEST nvmf_abort_qd_sizes 00:41:30.731 ************************************ 00:41:30.731 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:30.992 * Looking for test storage... 00:41:30.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:30.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.992 --rc genhtml_branch_coverage=1 00:41:30.992 --rc genhtml_function_coverage=1 00:41:30.992 --rc genhtml_legend=1 00:41:30.992 --rc geninfo_all_blocks=1 00:41:30.992 --rc geninfo_unexecuted_blocks=1 00:41:30.992 00:41:30.992 ' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:30.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.992 --rc genhtml_branch_coverage=1 00:41:30.992 --rc genhtml_function_coverage=1 00:41:30.992 --rc genhtml_legend=1 00:41:30.992 --rc geninfo_all_blocks=1 00:41:30.992 --rc geninfo_unexecuted_blocks=1 00:41:30.992 00:41:30.992 ' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:30.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.992 --rc genhtml_branch_coverage=1 00:41:30.992 --rc genhtml_function_coverage=1 00:41:30.992 --rc genhtml_legend=1 00:41:30.992 --rc geninfo_all_blocks=1 00:41:30.992 --rc geninfo_unexecuted_blocks=1 00:41:30.992 00:41:30.992 ' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:30.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.992 --rc genhtml_branch_coverage=1 00:41:30.992 --rc genhtml_function_coverage=1 00:41:30.992 --rc genhtml_legend=1 00:41:30.992 --rc geninfo_all_blocks=1 00:41:30.992 --rc geninfo_unexecuted_blocks=1 00:41:30.992 00:41:30.992 ' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:30.992 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:30.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:30.993 10:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- 
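The "[: : integer expression expected" message above is bash complaining that nvmf/common.sh line 33 ran a numeric test on an empty variable ('[' '' -eq 1 ']'). A minimal sketch of the usual guard, with a hypothetical flag name standing in for whichever variable is unset in this environment:

  # Hypothetical flag name; defaulting it to 0 keeps an empty value out of the numeric test.
  if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi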
nvmf/common.sh@320 -- # local -ga e810 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:39.126 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:39.126 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:39.126 Found net devices under 0000:31:00.0: cvl_0_0 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:39.126 Found net devices under 0000:31:00.1: cvl_0_1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:39.126 10:33:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:39.126 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:39.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:39.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:41:39.126 00:41:39.126 --- 10.0.0.2 ping statistics --- 00:41:39.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.127 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:41:39.127 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:39.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:39.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:41:39.127 00:41:39.127 --- 10.0.0.1 ping statistics --- 00:41:39.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.127 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:41:39.127 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.387 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:39.387 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:39.387 10:33:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:43.671 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:43.671 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:43.671 10:33:46 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=46733 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 46733 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 46733 ']' 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
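The nvmf_tcp_init trace above splits the two E810 ports between the root namespace (initiator side, 10.0.0.1) and a private namespace (target side, 10.0.0.2), opens TCP port 4420 in the firewall, and checks both directions with ping. A condensed sketch of those steps, with the interface and namespace names copied from the trace (they differ per machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator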
00:41:43.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:43.671 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.671 [2024-11-06 10:33:47.089101] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:41:43.671 [2024-11-06 10:33:47.089149] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:43.929 [2024-11-06 10:33:47.175135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:43.929 [2024-11-06 10:33:47.212319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:43.929 [2024-11-06 10:33:47.212355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:43.929 [2024-11-06 10:33:47.212362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:43.929 [2024-11-06 10:33:47.212369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:43.929 [2024-11-06 10:33:47.212375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:43.929 [2024-11-06 10:33:47.213894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:43.929 [2024-11-06 10:33:47.214117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:43.929 [2024-11-06 10:33:47.214118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.929 [2024-11-06 10:33:47.213968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:44.494 
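With networking in place, the trace launches nvmf_tgt inside the target namespace (4-core mask, all tracepoint groups enabled) and waits for it to answer on its RPC socket before configuring anything. A rough sketch of that startup, assuming the default /var/tmp/spdk.sock socket that waitforlisten polls:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # Block until the app serves RPCs; this is roughly what waitforlisten does, minus its retry/timeout logic.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done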
10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:44.494 10:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:44.494 ************************************ 00:41:44.494 START TEST spdk_target_abort 00:41:44.494 ************************************ 00:41:44.494 10:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:41:44.494 10:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:44.494 10:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:44.494 10:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.494 10:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.060 spdk_targetn1 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.060 [2024-11-06 10:33:48.295878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.060 [2024-11-06 10:33:48.344355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:45.060 10:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:45.060 [2024-11-06 10:33:48.516037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
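spdk_target_abort then configures the running target over JSON-RPC: the local NVMe SSD at 0000:65:00.0 becomes bdev spdk_targetn1, which is exported as namespace 1 of nqn.2016-06.io.spdk:testnqn listening on 10.0.0.2:4420. The same sequence expressed with scripts/rpc.py (the test uses its rpc_cmd wrapper, but the calls and arguments match the trace above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # creates bdev spdk_targetn1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420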
cid:188 nsid:1 lba:1424 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:41:45.060 [2024-11-06 10:33:48.516068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:41:45.060 [2024-11-06 10:33:48.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2264 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:41:45.060 [2024-11-06 10:33:48.535922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:45.060 [2024-11-06 10:33:48.536232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2296 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:41:45.060 [2024-11-06 10:33:48.536245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:48.344 Initializing NVMe Controllers 00:41:48.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:48.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:48.344 Initialization complete. Launching workers. 00:41:48.344 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17313, failed: 3 00:41:48.344 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3572, failed to submit 13744 00:41:48.344 success 673, unsuccessful 2899, failed 0 00:41:48.344 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:48.344 10:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:48.344 [2024-11-06 10:33:51.707092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:832 len:8 PRP1 0x200004e54000 PRP2 0x0 00:41:48.344 [2024-11-06 10:33:51.707144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:41:48.344 [2024-11-06 10:33:51.730039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:1424 len:8 PRP1 0x200004e52000 PRP2 0x0 00:41:48.344 [2024-11-06 10:33:51.730067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:41:48.344 [2024-11-06 10:33:51.809987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:3248 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:41:48.345 [2024-11-06 10:33:51.810013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:009b p:0 m:0 dnr:0 00:41:48.345 [2024-11-06 10:33:51.842118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:4008 len:8 PRP1 0x200004e58000 PRP2 0x0 00:41:48.345 [2024-11-06 10:33:51.842147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:51.625 Initializing NVMe Controllers 00:41:51.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:51.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:51.625 Initialization complete. 
Launching workers. 00:41:51.625 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8570, failed: 4 00:41:51.625 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7331 00:41:51.625 success 347, unsuccessful 896, failed 0 00:41:51.625 10:33:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:51.625 10:33:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:54.902 Initializing NVMe Controllers 00:41:54.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:54.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:54.902 Initialization complete. Launching workers. 00:41:54.902 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41827, failed: 0 00:41:54.902 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2582, failed to submit 39245 00:41:54.902 success 614, unsuccessful 1968, failed 0 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.902 10:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 46733 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 46733 ']' 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 46733 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:56.800 10:33:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 46733 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 46733' 00:41:56.800 killing process with pid 46733 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 46733 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- 
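Each pass above runs the abort example against that subsystem at a different queue depth and reports how many I/Os completed, how many abort commands were submitted, and how many of those succeeded. A minimal sketch of the sweep (arguments copied from the trace):

  ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
  TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # 50/50 read/write mix, 4 KiB I/O; aborts target whatever is still in flight at each queue depth.
      "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
  done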
common/autotest_common.sh@976 -- # wait 46733 00:41:56.800 00:41:56.800 real 0m12.156s 00:41:56.800 user 0m49.548s 00:41:56.800 sys 0m1.887s 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:56.800 ************************************ 00:41:56.800 END TEST spdk_target_abort 00:41:56.800 ************************************ 00:41:56.800 10:34:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:56.800 10:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:56.800 10:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:56.800 10:34:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:56.800 ************************************ 00:41:56.800 START TEST kernel_target_abort 00:41:56.800 ************************************ 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:56.800 10:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:00.998 Waiting for block devices as requested 00:42:00.998 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:00.998 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:00.998 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:00.998 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:00.998 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:01.257 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:01.257 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:01.257 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:01.518 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:01.518 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:01.777 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:01.777 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:01.777 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:01.777 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:02.037 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:02.037 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:02.037 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:02.296 No valid GPT data, bailing 00:42:02.296 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:02.555 10:34:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:02.555 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:42:02.555 00:42:02.555 Discovery Log Number of Records 2, Generation counter 2 00:42:02.555 =====Discovery Log Entry 0====== 00:42:02.555 trtype: tcp 00:42:02.555 adrfam: ipv4 00:42:02.555 subtype: current discovery subsystem 00:42:02.555 treq: not specified, sq flow control disable supported 00:42:02.555 portid: 1 00:42:02.555 trsvcid: 4420 00:42:02.555 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:02.555 traddr: 10.0.0.1 00:42:02.555 eflags: none 00:42:02.555 sectype: none 00:42:02.556 =====Discovery Log Entry 1====== 00:42:02.556 trtype: tcp 00:42:02.556 adrfam: ipv4 00:42:02.556 subtype: nvme subsystem 00:42:02.556 treq: not specified, sq flow control disable supported 00:42:02.556 portid: 1 00:42:02.556 trsvcid: 4420 00:42:02.556 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:02.556 traddr: 10.0.0.1 00:42:02.556 eflags: none 00:42:02.556 sectype: none 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.556 10:34:05 
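kernel_target_abort repeats the experiment against the in-kernel nvmet driver instead of the SPDK app: configure_kernel_target builds the subsystem, namespace, and port through configfs, backs the namespace with the whole /dev/nvme0n1 device, and verifies the export with nvme discover. A condensed sketch of those writes; the configfs paths match the trace, while the attribute file names shown are the standard nvmet ones this helper is assumed to write to:

  NQN=nqn.2016-06.io.spdk:testnqn
  SUB=/sys/kernel/config/nvmet/subsystems/$NQN
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet nvmet_tcp
  mkdir -p "$SUB/namespaces/1" "$PORT"
  echo 1            > "$SUB/attr_allow_any_host"
  echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
  echo 1            > "$SUB/namespaces/1/enable"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo tcp          > "$PORT/addr_trtype"
  echo 4420         > "$PORT/addr_trsvcid"
  echo ipv4         > "$PORT/addr_adrfam"
  ln -s "$SUB" "$PORT/subsystems/$NQN"
  nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list both the discovery entry and testnqn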
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:02.556 10:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:05.843 Initializing NVMe Controllers 00:42:05.843 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:05.843 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:05.843 Initialization complete. Launching workers. 00:42:05.843 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67512, failed: 0 00:42:05.843 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67512, failed to submit 0 00:42:05.843 success 0, unsuccessful 67512, failed 0 00:42:05.843 10:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:05.843 10:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:09.134 Initializing NVMe Controllers 00:42:09.134 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:09.134 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:09.134 Initialization complete. Launching workers. 
00:42:09.134 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107789, failed: 0 00:42:09.134 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27162, failed to submit 80627 00:42:09.134 success 0, unsuccessful 27162, failed 0 00:42:09.134 10:34:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:09.134 10:34:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:12.426 Initializing NVMe Controllers 00:42:12.426 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:12.426 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:12.426 Initialization complete. Launching workers. 00:42:12.426 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101957, failed: 0 00:42:12.427 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25482, failed to submit 76475 00:42:12.427 success 0, unsuccessful 25482, failed 0 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:12.427 10:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:15.723 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:15.723 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:15.724 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:15.724 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:42:15.724 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:17.635 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:17.635 00:42:17.635 real 0m20.775s 00:42:17.635 user 0m10.056s 00:42:17.635 sys 0m6.437s 00:42:17.635 10:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:17.635 10:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:17.635 ************************************ 00:42:17.635 END TEST kernel_target_abort 00:42:17.635 ************************************ 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:17.635 rmmod nvme_tcp 00:42:17.635 rmmod nvme_fabrics 00:42:17.635 rmmod nvme_keyring 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 46733 ']' 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 46733 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 46733 ']' 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 46733 00:42:17.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (46733) - No such process 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 46733 is not found' 00:42:17.635 Process with pid 46733 is not found 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:17.635 10:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:20.924 Waiting for block devices as requested 00:42:20.924 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:20.924 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:20.924 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:21.182 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:21.182 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:21.182 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:21.441 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:21.441 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:21.441 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:21.701 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:21.701 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:21.962 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:21.962 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:21.962 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:21.962 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:22.223 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:22.223 0000:00:01.1 (8086 
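Cleanup in the trace above mirrors setup in reverse: clean_kernel_target disables the namespace, unlinks the port, removes the configfs directories, and unloads nvmet, after which nvmftestfini removes the nvme-tcp/nvme-fabrics modules and restores the firewall rules. A sketch of the kernel-target half (paths as in the trace):

  NQN=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1/enable
  rm -f    /sys/kernel/config/nvmet/ports/1/subsystems/$NQN
  rmdir    /sys/kernel/config/nvmet/subsystems/$NQN/namespaces/1
  rmdir    /sys/kernel/config/nvmet/ports/1
  rmdir    /sys/kernel/config/nvmet/subsystems/$NQN
  modprobe -r nvmet_tcp nvmet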
0b00): vfio-pci -> ioatdma 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:22.484 10:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.026 10:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:25.026 00:42:25.026 real 0m53.753s 00:42:25.026 user 1m5.172s 00:42:25.026 sys 0m20.094s 00:42:25.026 10:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:25.026 10:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:25.026 ************************************ 00:42:25.026 END TEST nvmf_abort_qd_sizes 00:42:25.026 ************************************ 00:42:25.026 10:34:27 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:25.026 10:34:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:25.026 10:34:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:25.026 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:42:25.026 ************************************ 00:42:25.026 START TEST keyring_file 00:42:25.026 ************************************ 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:25.026 * Looking for test storage... 
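The abort accounting in the kernel_target_abort passes above is internally consistent: in each pass, the aborts submitted plus the aborts that failed to submit equal the total I/O completed on the namespace (27162 + 80627 = 107789, and 25482 + 76475 = 101957 for the -q 64 pass). A minimal Python check of that bookkeeping, with the figures copied from the log and the variable names chosen here purely for illustration:

    # Summary figures copied from the two kernel_target_abort passes above:
    # (io_completed, aborts_submitted, aborts_failed_to_submit)
    passes = [
        (107789, 27162, 80627),
        (101957, 25482, 76475),  # the "-q 64" pass
    ]
    for io_completed, submitted, failed_to_submit in passes:
        # every completed I/O is accounted for by either a submitted or a rejected abort
        assert submitted + failed_to_submit == io_completed
    print(f"abort accounting consistent across {len(passes)} passes")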
00:42:25.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.026 --rc genhtml_branch_coverage=1 00:42:25.026 --rc genhtml_function_coverage=1 00:42:25.026 --rc genhtml_legend=1 00:42:25.026 --rc geninfo_all_blocks=1 00:42:25.026 --rc geninfo_unexecuted_blocks=1 00:42:25.026 00:42:25.026 ' 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.026 --rc genhtml_branch_coverage=1 00:42:25.026 --rc genhtml_function_coverage=1 00:42:25.026 --rc genhtml_legend=1 00:42:25.026 --rc geninfo_all_blocks=1 
00:42:25.026 --rc geninfo_unexecuted_blocks=1 00:42:25.026 00:42:25.026 ' 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.026 --rc genhtml_branch_coverage=1 00:42:25.026 --rc genhtml_function_coverage=1 00:42:25.026 --rc genhtml_legend=1 00:42:25.026 --rc geninfo_all_blocks=1 00:42:25.026 --rc geninfo_unexecuted_blocks=1 00:42:25.026 00:42:25.026 ' 00:42:25.026 10:34:28 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:25.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.026 --rc genhtml_branch_coverage=1 00:42:25.026 --rc genhtml_function_coverage=1 00:42:25.026 --rc genhtml_legend=1 00:42:25.026 --rc geninfo_all_blocks=1 00:42:25.026 --rc geninfo_unexecuted_blocks=1 00:42:25.026 00:42:25.026 ' 00:42:25.026 10:34:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:25.026 10:34:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.026 10:34:28 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.026 10:34:28 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.027 10:34:28 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.027 10:34:28 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.027 10:34:28 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.027 10:34:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.027 10:34:28 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.027 10:34:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:25.027 10:34:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:25.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3HeIAwKEmL 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3HeIAwKEmL 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3HeIAwKEmL 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3HeIAwKEmL 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eQbYoFtLn0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:25.027 10:34:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eQbYoFtLn0 00:42:25.027 10:34:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eQbYoFtLn0 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eQbYoFtLn0 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=57492 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 57492 00:42:25.027 10:34:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 57492 ']' 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:25.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:25.027 10:34:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:25.027 [2024-11-06 10:34:28.433004] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:25.027 [2024-11-06 10:34:28.433062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57492 ] 00:42:25.027 [2024-11-06 10:34:28.509695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.287 [2024-11-06 10:34:28.546595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.855 10:34:29 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:25.856 10:34:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:25.856 [2024-11-06 10:34:29.218354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:25.856 null0 00:42:25.856 [2024-11-06 10:34:29.250397] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:25.856 [2024-11-06 10:34:29.250695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:25.856 10:34:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:25.856 [2024-11-06 10:34:29.282467] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:25.856 request: 00:42:25.856 { 00:42:25.856 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:25.856 "secure_channel": false, 00:42:25.856 "listen_address": { 00:42:25.856 "trtype": "tcp", 00:42:25.856 "traddr": "127.0.0.1", 00:42:25.856 "trsvcid": "4420" 00:42:25.856 }, 00:42:25.856 "method": "nvmf_subsystem_add_listener", 00:42:25.856 "req_id": 1 00:42:25.856 } 00:42:25.856 Got JSON-RPC error response 00:42:25.856 response: 00:42:25.856 { 00:42:25.856 "code": -32602, 
00:42:25.856 "message": "Invalid parameters" 00:42:25.856 } 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:25.856 10:34:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=57579 00:42:25.856 10:34:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 57579 /var/tmp/bperf.sock 00:42:25.856 10:34:29 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 57579 ']' 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:25.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:25.856 10:34:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:25.856 [2024-11-06 10:34:29.339779] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:25.856 [2024-11-06 10:34:29.339829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57579 ] 00:42:26.115 [2024-11-06 10:34:29.435087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.115 [2024-11-06 10:34:29.471102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:26.683 10:34:30 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:26.683 10:34:30 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:26.683 10:34:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:26.683 10:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:26.942 10:34:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eQbYoFtLn0 00:42:26.942 10:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eQbYoFtLn0 00:42:27.201 10:34:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:27.201 10:34:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.201 10:34:30 
keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3HeIAwKEmL == \/\t\m\p\/\t\m\p\.\3\H\e\I\A\w\K\E\m\L ]] 00:42:27.201 10:34:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:27.201 10:34:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:27.201 10:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.459 10:34:30 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.eQbYoFtLn0 == \/\t\m\p\/\t\m\p\.\e\Q\b\Y\o\F\t\L\n\0 ]] 00:42:27.459 10:34:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:27.459 10:34:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:27.459 10:34:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.459 10:34:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.459 10:34:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.459 10:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.718 10:34:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:27.718 10:34:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:27.718 10:34:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:27.718 10:34:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.718 10:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.977 [2024-11-06 10:34:31.333826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:27.977 nvme0n1 00:42:27.977 10:34:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:27.977 10:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:27.977 10:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.977 10:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.977 10:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.977 10:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:28.236 10:34:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:28.236 10:34:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:28.236 10:34:31 keyring_file -- keyring/common.sh@12 
-- # get_key key1 00:42:28.236 10:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:28.236 10:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.236 10:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.236 10:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:28.495 10:34:31 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:28.495 10:34:31 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:28.495 Running I/O for 1 seconds... 00:42:29.433 16473.00 IOPS, 64.35 MiB/s 00:42:29.433 Latency(us) 00:42:29.433 [2024-11-06T09:34:32.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:29.433 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:29.433 nvme0n1 : 1.01 16485.19 64.40 0.00 0.00 7734.83 5734.40 16384.00 00:42:29.433 [2024-11-06T09:34:32.934Z] =================================================================================================================== 00:42:29.433 [2024-11-06T09:34:32.935Z] Total : 16485.19 64.40 0.00 0.00 7734.83 5734.40 16384.00 00:42:29.434 { 00:42:29.434 "results": [ 00:42:29.434 { 00:42:29.434 "job": "nvme0n1", 00:42:29.434 "core_mask": "0x2", 00:42:29.434 "workload": "randrw", 00:42:29.434 "percentage": 50, 00:42:29.434 "status": "finished", 00:42:29.434 "queue_depth": 128, 00:42:29.434 "io_size": 4096, 00:42:29.434 "runtime": 1.007086, 00:42:29.434 "iops": 16485.185972200983, 00:42:29.434 "mibps": 64.39525770391009, 00:42:29.434 "io_failed": 0, 00:42:29.434 "io_timeout": 0, 00:42:29.434 "avg_latency_us": 7734.832911697386, 00:42:29.434 "min_latency_us": 5734.4, 00:42:29.434 "max_latency_us": 16384.0 00:42:29.434 } 00:42:29.434 ], 00:42:29.434 "core_count": 1 00:42:29.434 } 00:42:29.434 10:34:32 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:29.434 10:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:29.693 10:34:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:29.693 10:34:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.693 10:34:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:29.693 10:34:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.693 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.693 10:34:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:29.953 10:34:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:29.953 10:34:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:29.953 10:34:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:29.953 10:34:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.953 10:34:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.954 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.954 10:34:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:42:29.954 10:34:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:29.954 10:34:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:29.954 10:34:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:29.954 10:34:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:29.954 10:34:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:30.213 [2024-11-06 10:34:33.613698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:30.213 [2024-11-06 10:34:33.614683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183f9d0 (107): Transport endpoint is not connected 00:42:30.213 [2024-11-06 10:34:33.615679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183f9d0 (9): Bad file descriptor 00:42:30.213 [2024-11-06 10:34:33.616681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:30.213 [2024-11-06 10:34:33.616690] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:30.213 [2024-11-06 10:34:33.616696] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:30.213 [2024-11-06 10:34:33.616703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
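For context on the /tmp/tmp.* PSK files that key0 and key1 refer to throughout this keyring_file run: prep_key writes each key with format_interchange_psk (the inline 'python -' step traced earlier) and then chmod 0600, since keyring_file_add_key rejects group- or world-accessible files, as the chmod 0660 negative test further down shows. A rough sketch of the encoding those helpers appear to produce, assuming the interchange layout is prefix, a two-hex-digit hash indicator, and base64 of the key bytes followed by their little-endian CRC32; treat it as an illustration rather than the exact snippet behind the 'python -' line:

    import base64
    import zlib

    def format_interchange_psk(hex_key: str, hmac: int = 0, prefix: str = "NVMeTLSkey-1") -> str:
        # base64-encode the raw PSK followed by its CRC32 (4 bytes, little-endian),
        # then wrap it as "<prefix>:<hash indicator>:<base64>:".
        key = bytes.fromhex(hex_key)
        crc = zlib.crc32(key).to_bytes(4, "little")
        return f"{prefix}:{hmac:02x}:{base64.b64encode(key + crc).decode()}:"

    # key0 material from this run (digest argument 0, as in the trace above):
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))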
00:42:30.213 request: 00:42:30.213 { 00:42:30.213 "name": "nvme0", 00:42:30.213 "trtype": "tcp", 00:42:30.213 "traddr": "127.0.0.1", 00:42:30.213 "adrfam": "ipv4", 00:42:30.213 "trsvcid": "4420", 00:42:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:30.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:30.213 "prchk_reftag": false, 00:42:30.213 "prchk_guard": false, 00:42:30.213 "hdgst": false, 00:42:30.213 "ddgst": false, 00:42:30.213 "psk": "key1", 00:42:30.213 "allow_unrecognized_csi": false, 00:42:30.213 "method": "bdev_nvme_attach_controller", 00:42:30.213 "req_id": 1 00:42:30.213 } 00:42:30.213 Got JSON-RPC error response 00:42:30.213 response: 00:42:30.213 { 00:42:30.213 "code": -5, 00:42:30.213 "message": "Input/output error" 00:42:30.213 } 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:30.213 10:34:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:30.213 10:34:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:30.213 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.517 10:34:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:30.517 10:34:33 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:30.517 10:34:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:30.517 10:34:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:30.517 10:34:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:30.517 10:34:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:30.517 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.778 10:34:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:30.778 10:34:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:30.778 10:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:30.778 10:34:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:30.778 10:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:31.038 10:34:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:31.038 10:34:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:31.038 10:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.038 10:34:34 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:31.038 10:34:34 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3HeIAwKEmL 00:42:31.038 10:34:34 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:31.038 10:34:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.038 10:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.296 [2024-11-06 10:34:34.632330] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3HeIAwKEmL': 0100660 00:42:31.296 [2024-11-06 10:34:34.632350] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:31.296 request: 00:42:31.296 { 00:42:31.296 "name": "key0", 00:42:31.296 "path": "/tmp/tmp.3HeIAwKEmL", 00:42:31.296 "method": "keyring_file_add_key", 00:42:31.296 "req_id": 1 00:42:31.296 } 00:42:31.296 Got JSON-RPC error response 00:42:31.296 response: 00:42:31.296 { 00:42:31.296 "code": -1, 00:42:31.296 "message": "Operation not permitted" 00:42:31.296 } 00:42:31.296 10:34:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:31.296 10:34:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:31.296 10:34:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:31.296 10:34:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:31.296 10:34:34 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3HeIAwKEmL 00:42:31.297 10:34:34 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.297 10:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3HeIAwKEmL 00:42:31.555 10:34:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3HeIAwKEmL 00:42:31.555 10:34:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:31.555 10:34:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:31.555 10:34:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:31.555 10:34:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.555 10:34:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:31.555 10:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.555 10:34:34 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:31.555 10:34:34 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:31.555 10:34:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:31.556 10:34:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:31.556 10:34:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:31.556 10:34:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:31.556 10:34:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:31.556 10:34:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:31.556 10:34:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:31.556 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:31.815 [2024-11-06 10:34:35.157664] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3HeIAwKEmL': No such file or directory 00:42:31.815 [2024-11-06 10:34:35.157677] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:31.815 [2024-11-06 10:34:35.157690] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:31.815 [2024-11-06 10:34:35.157697] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:31.815 [2024-11-06 10:34:35.157702] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:31.815 [2024-11-06 10:34:35.157707] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:31.815 request: 00:42:31.815 { 00:42:31.815 "name": "nvme0", 00:42:31.815 "trtype": "tcp", 00:42:31.815 "traddr": "127.0.0.1", 00:42:31.815 "adrfam": "ipv4", 00:42:31.815 "trsvcid": "4420", 00:42:31.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:31.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:31.816 "prchk_reftag": false, 00:42:31.816 "prchk_guard": false, 00:42:31.816 "hdgst": false, 00:42:31.816 "ddgst": false, 00:42:31.816 "psk": "key0", 00:42:31.816 "allow_unrecognized_csi": false, 00:42:31.816 "method": "bdev_nvme_attach_controller", 00:42:31.816 "req_id": 1 00:42:31.816 } 00:42:31.816 Got JSON-RPC error response 00:42:31.816 response: 00:42:31.816 { 00:42:31.816 "code": -19, 00:42:31.816 "message": "No such device" 00:42:31.816 } 00:42:31.816 10:34:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:31.816 10:34:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:31.816 10:34:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:31.816 10:34:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:31.816 10:34:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:31.816 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:32.075 10:34:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:32.075 10:34:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FdpTXbN2D6 00:42:32.075 10:34:35 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:32.075 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:32.335 nvme0n1 00:42:32.335 10:34:35 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:32.335 10:34:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:32.335 10:34:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.335 10:34:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.335 10:34:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.335 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.593 10:34:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:32.593 10:34:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:32.593 10:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:32.853 10:34:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:32.853 10:34:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.853 10:34:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:32.853 10:34:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.853 10:34:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:33.112 10:34:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:33.112 10:34:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:33.112 10:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:33.372 10:34:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:33.372 10:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.372 10:34:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:33.372 10:34:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:33.372 10:34:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FdpTXbN2D6 00:42:33.372 10:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FdpTXbN2D6 00:42:33.630 10:34:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eQbYoFtLn0 00:42:33.630 10:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eQbYoFtLn0 00:42:33.889 10:34:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:33.889 10:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:34.148 nvme0n1 00:42:34.148 10:34:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:34.149 10:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:34.409 10:34:37 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:34.409 "subsystems": [ 00:42:34.409 { 00:42:34.409 "subsystem": "keyring", 00:42:34.409 "config": [ 00:42:34.409 { 00:42:34.409 "method": "keyring_file_add_key", 00:42:34.409 "params": { 00:42:34.409 "name": "key0", 00:42:34.409 "path": "/tmp/tmp.FdpTXbN2D6" 00:42:34.409 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "keyring_file_add_key", 00:42:34.410 "params": { 00:42:34.410 "name": "key1", 00:42:34.410 "path": "/tmp/tmp.eQbYoFtLn0" 00:42:34.410 } 00:42:34.410 } 00:42:34.410 ] 
00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "iobuf", 00:42:34.410 "config": [ 00:42:34.410 { 00:42:34.410 "method": "iobuf_set_options", 00:42:34.410 "params": { 00:42:34.410 "small_pool_count": 8192, 00:42:34.410 "large_pool_count": 1024, 00:42:34.410 "small_bufsize": 8192, 00:42:34.410 "large_bufsize": 135168, 00:42:34.410 "enable_numa": false 00:42:34.410 } 00:42:34.410 } 00:42:34.410 ] 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "sock", 00:42:34.410 "config": [ 00:42:34.410 { 00:42:34.410 "method": "sock_set_default_impl", 00:42:34.410 "params": { 00:42:34.410 "impl_name": "posix" 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "sock_impl_set_options", 00:42:34.410 "params": { 00:42:34.410 "impl_name": "ssl", 00:42:34.410 "recv_buf_size": 4096, 00:42:34.410 "send_buf_size": 4096, 00:42:34.410 "enable_recv_pipe": true, 00:42:34.410 "enable_quickack": false, 00:42:34.410 "enable_placement_id": 0, 00:42:34.410 "enable_zerocopy_send_server": true, 00:42:34.410 "enable_zerocopy_send_client": false, 00:42:34.410 "zerocopy_threshold": 0, 00:42:34.410 "tls_version": 0, 00:42:34.410 "enable_ktls": false 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "sock_impl_set_options", 00:42:34.410 "params": { 00:42:34.410 "impl_name": "posix", 00:42:34.410 "recv_buf_size": 2097152, 00:42:34.410 "send_buf_size": 2097152, 00:42:34.410 "enable_recv_pipe": true, 00:42:34.410 "enable_quickack": false, 00:42:34.410 "enable_placement_id": 0, 00:42:34.410 "enable_zerocopy_send_server": true, 00:42:34.410 "enable_zerocopy_send_client": false, 00:42:34.410 "zerocopy_threshold": 0, 00:42:34.410 "tls_version": 0, 00:42:34.410 "enable_ktls": false 00:42:34.410 } 00:42:34.410 } 00:42:34.410 ] 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "vmd", 00:42:34.410 "config": [] 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "accel", 00:42:34.410 "config": [ 00:42:34.410 { 00:42:34.410 "method": "accel_set_options", 00:42:34.410 "params": { 00:42:34.410 "small_cache_size": 128, 00:42:34.410 "large_cache_size": 16, 00:42:34.410 "task_count": 2048, 00:42:34.410 "sequence_count": 2048, 00:42:34.410 "buf_count": 2048 00:42:34.410 } 00:42:34.410 } 00:42:34.410 ] 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "bdev", 00:42:34.410 "config": [ 00:42:34.410 { 00:42:34.410 "method": "bdev_set_options", 00:42:34.410 "params": { 00:42:34.410 "bdev_io_pool_size": 65535, 00:42:34.410 "bdev_io_cache_size": 256, 00:42:34.410 "bdev_auto_examine": true, 00:42:34.410 "iobuf_small_cache_size": 128, 00:42:34.410 "iobuf_large_cache_size": 16 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_raid_set_options", 00:42:34.410 "params": { 00:42:34.410 "process_window_size_kb": 1024, 00:42:34.410 "process_max_bandwidth_mb_sec": 0 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_iscsi_set_options", 00:42:34.410 "params": { 00:42:34.410 "timeout_sec": 30 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_nvme_set_options", 00:42:34.410 "params": { 00:42:34.410 "action_on_timeout": "none", 00:42:34.410 "timeout_us": 0, 00:42:34.410 "timeout_admin_us": 0, 00:42:34.410 "keep_alive_timeout_ms": 10000, 00:42:34.410 "arbitration_burst": 0, 00:42:34.410 "low_priority_weight": 0, 00:42:34.410 "medium_priority_weight": 0, 00:42:34.410 "high_priority_weight": 0, 00:42:34.410 "nvme_adminq_poll_period_us": 10000, 00:42:34.410 "nvme_ioq_poll_period_us": 0, 00:42:34.410 "io_queue_requests": 512, 
00:42:34.410 "delay_cmd_submit": true, 00:42:34.410 "transport_retry_count": 4, 00:42:34.410 "bdev_retry_count": 3, 00:42:34.410 "transport_ack_timeout": 0, 00:42:34.410 "ctrlr_loss_timeout_sec": 0, 00:42:34.410 "reconnect_delay_sec": 0, 00:42:34.410 "fast_io_fail_timeout_sec": 0, 00:42:34.410 "disable_auto_failback": false, 00:42:34.410 "generate_uuids": false, 00:42:34.410 "transport_tos": 0, 00:42:34.410 "nvme_error_stat": false, 00:42:34.410 "rdma_srq_size": 0, 00:42:34.410 "io_path_stat": false, 00:42:34.410 "allow_accel_sequence": false, 00:42:34.410 "rdma_max_cq_size": 0, 00:42:34.410 "rdma_cm_event_timeout_ms": 0, 00:42:34.410 "dhchap_digests": [ 00:42:34.410 "sha256", 00:42:34.410 "sha384", 00:42:34.410 "sha512" 00:42:34.410 ], 00:42:34.410 "dhchap_dhgroups": [ 00:42:34.410 "null", 00:42:34.410 "ffdhe2048", 00:42:34.410 "ffdhe3072", 00:42:34.410 "ffdhe4096", 00:42:34.410 "ffdhe6144", 00:42:34.410 "ffdhe8192" 00:42:34.410 ] 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_nvme_attach_controller", 00:42:34.410 "params": { 00:42:34.410 "name": "nvme0", 00:42:34.410 "trtype": "TCP", 00:42:34.410 "adrfam": "IPv4", 00:42:34.410 "traddr": "127.0.0.1", 00:42:34.410 "trsvcid": "4420", 00:42:34.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.410 "prchk_reftag": false, 00:42:34.410 "prchk_guard": false, 00:42:34.410 "ctrlr_loss_timeout_sec": 0, 00:42:34.410 "reconnect_delay_sec": 0, 00:42:34.410 "fast_io_fail_timeout_sec": 0, 00:42:34.410 "psk": "key0", 00:42:34.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.410 "hdgst": false, 00:42:34.410 "ddgst": false, 00:42:34.410 "multipath": "multipath" 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_nvme_set_hotplug", 00:42:34.410 "params": { 00:42:34.410 "period_us": 100000, 00:42:34.410 "enable": false 00:42:34.410 } 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "method": "bdev_wait_for_examine" 00:42:34.410 } 00:42:34.410 ] 00:42:34.410 }, 00:42:34.410 { 00:42:34.410 "subsystem": "nbd", 00:42:34.410 "config": [] 00:42:34.410 } 00:42:34.410 ] 00:42:34.410 }' 00:42:34.410 10:34:37 keyring_file -- keyring/file.sh@115 -- # killprocess 57579 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 57579 ']' 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@956 -- # kill -0 57579 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57579 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57579' 00:42:34.410 killing process with pid 57579 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@971 -- # kill 57579 00:42:34.410 Received shutdown signal, test time was about 1.000000 seconds 00:42:34.410 00:42:34.410 Latency(us) 00:42:34.410 [2024-11-06T09:34:37.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:34.410 [2024-11-06T09:34:37.911Z] =================================================================================================================== 00:42:34.410 [2024-11-06T09:34:37.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:34.410 10:34:37 
keyring_file -- common/autotest_common.sh@976 -- # wait 57579 00:42:34.410 10:34:37 keyring_file -- keyring/file.sh@118 -- # bperfpid=59395 00:42:34.410 10:34:37 keyring_file -- keyring/file.sh@120 -- # waitforlisten 59395 /var/tmp/bperf.sock 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 59395 ']' 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:34.410 10:34:37 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:34.410 10:34:37 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:34.411 10:34:37 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:34.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:34.411 10:34:37 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:34.411 10:34:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:34.411 10:34:37 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:34.411 "subsystems": [ 00:42:34.411 { 00:42:34.411 "subsystem": "keyring", 00:42:34.411 "config": [ 00:42:34.411 { 00:42:34.411 "method": "keyring_file_add_key", 00:42:34.411 "params": { 00:42:34.411 "name": "key0", 00:42:34.411 "path": "/tmp/tmp.FdpTXbN2D6" 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "keyring_file_add_key", 00:42:34.411 "params": { 00:42:34.411 "name": "key1", 00:42:34.411 "path": "/tmp/tmp.eQbYoFtLn0" 00:42:34.411 } 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "subsystem": "iobuf", 00:42:34.411 "config": [ 00:42:34.411 { 00:42:34.411 "method": "iobuf_set_options", 00:42:34.411 "params": { 00:42:34.411 "small_pool_count": 8192, 00:42:34.411 "large_pool_count": 1024, 00:42:34.411 "small_bufsize": 8192, 00:42:34.411 "large_bufsize": 135168, 00:42:34.411 "enable_numa": false 00:42:34.411 } 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "subsystem": "sock", 00:42:34.411 "config": [ 00:42:34.411 { 00:42:34.411 "method": "sock_set_default_impl", 00:42:34.411 "params": { 00:42:34.411 "impl_name": "posix" 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "sock_impl_set_options", 00:42:34.411 "params": { 00:42:34.411 "impl_name": "ssl", 00:42:34.411 "recv_buf_size": 4096, 00:42:34.411 "send_buf_size": 4096, 00:42:34.411 "enable_recv_pipe": true, 00:42:34.411 "enable_quickack": false, 00:42:34.411 "enable_placement_id": 0, 00:42:34.411 "enable_zerocopy_send_server": true, 00:42:34.411 "enable_zerocopy_send_client": false, 00:42:34.411 "zerocopy_threshold": 0, 00:42:34.411 "tls_version": 0, 00:42:34.411 "enable_ktls": false 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "sock_impl_set_options", 00:42:34.411 "params": { 00:42:34.411 "impl_name": "posix", 00:42:34.411 "recv_buf_size": 2097152, 00:42:34.411 "send_buf_size": 2097152, 00:42:34.411 "enable_recv_pipe": true, 00:42:34.411 "enable_quickack": false, 00:42:34.411 "enable_placement_id": 0, 00:42:34.411 "enable_zerocopy_send_server": true, 00:42:34.411 "enable_zerocopy_send_client": false, 00:42:34.411 "zerocopy_threshold": 0, 00:42:34.411 "tls_version": 0, 00:42:34.411 "enable_ktls": false 00:42:34.411 } 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }, 00:42:34.411 { 
00:42:34.411 "subsystem": "vmd", 00:42:34.411 "config": [] 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "subsystem": "accel", 00:42:34.411 "config": [ 00:42:34.411 { 00:42:34.411 "method": "accel_set_options", 00:42:34.411 "params": { 00:42:34.411 "small_cache_size": 128, 00:42:34.411 "large_cache_size": 16, 00:42:34.411 "task_count": 2048, 00:42:34.411 "sequence_count": 2048, 00:42:34.411 "buf_count": 2048 00:42:34.411 } 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "subsystem": "bdev", 00:42:34.411 "config": [ 00:42:34.411 { 00:42:34.411 "method": "bdev_set_options", 00:42:34.411 "params": { 00:42:34.411 "bdev_io_pool_size": 65535, 00:42:34.411 "bdev_io_cache_size": 256, 00:42:34.411 "bdev_auto_examine": true, 00:42:34.411 "iobuf_small_cache_size": 128, 00:42:34.411 "iobuf_large_cache_size": 16 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_raid_set_options", 00:42:34.411 "params": { 00:42:34.411 "process_window_size_kb": 1024, 00:42:34.411 "process_max_bandwidth_mb_sec": 0 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_iscsi_set_options", 00:42:34.411 "params": { 00:42:34.411 "timeout_sec": 30 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_nvme_set_options", 00:42:34.411 "params": { 00:42:34.411 "action_on_timeout": "none", 00:42:34.411 "timeout_us": 0, 00:42:34.411 "timeout_admin_us": 0, 00:42:34.411 "keep_alive_timeout_ms": 10000, 00:42:34.411 "arbitration_burst": 0, 00:42:34.411 "low_priority_weight": 0, 00:42:34.411 "medium_priority_weight": 0, 00:42:34.411 "high_priority_weight": 0, 00:42:34.411 "nvme_adminq_poll_period_us": 10000, 00:42:34.411 "nvme_ioq_poll_period_us": 0, 00:42:34.411 "io_queue_requests": 512, 00:42:34.411 "delay_cmd_submit": true, 00:42:34.411 "transport_retry_count": 4, 00:42:34.411 "bdev_retry_count": 3, 00:42:34.411 "transport_ack_timeout": 0, 00:42:34.411 "ctrlr_loss_timeout_sec": 0, 00:42:34.411 "reconnect_delay_sec": 0, 00:42:34.411 "fast_io_fail_timeout_sec": 0, 00:42:34.411 "disable_auto_failback": false, 00:42:34.411 "generate_uuids": false, 00:42:34.411 "transport_tos": 0, 00:42:34.411 "nvme_error_stat": false, 00:42:34.411 "rdma_srq_size": 0, 00:42:34.411 "io_path_stat": false, 00:42:34.411 "allow_accel_sequence": false, 00:42:34.411 "rdma_max_cq_size": 0, 00:42:34.411 "rdma_cm_event_timeout_ms": 0, 00:42:34.411 "dhchap_digests": [ 00:42:34.411 "sha256", 00:42:34.411 "sha384", 00:42:34.411 "sha512" 00:42:34.411 ], 00:42:34.411 "dhchap_dhgroups": [ 00:42:34.411 "null", 00:42:34.411 "ffdhe2048", 00:42:34.411 "ffdhe3072", 00:42:34.411 "ffdhe4096", 00:42:34.411 "ffdhe6144", 00:42:34.411 "ffdhe8192" 00:42:34.411 ] 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_nvme_attach_controller", 00:42:34.411 "params": { 00:42:34.411 "name": "nvme0", 00:42:34.411 "trtype": "TCP", 00:42:34.411 "adrfam": "IPv4", 00:42:34.411 "traddr": "127.0.0.1", 00:42:34.411 "trsvcid": "4420", 00:42:34.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.411 "prchk_reftag": false, 00:42:34.411 "prchk_guard": false, 00:42:34.411 "ctrlr_loss_timeout_sec": 0, 00:42:34.411 "reconnect_delay_sec": 0, 00:42:34.411 "fast_io_fail_timeout_sec": 0, 00:42:34.411 "psk": "key0", 00:42:34.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.411 "hdgst": false, 00:42:34.411 "ddgst": false, 00:42:34.411 "multipath": "multipath" 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_nvme_set_hotplug", 00:42:34.411 "params": { 00:42:34.411 
"period_us": 100000, 00:42:34.411 "enable": false 00:42:34.411 } 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "method": "bdev_wait_for_examine" 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }, 00:42:34.411 { 00:42:34.411 "subsystem": "nbd", 00:42:34.411 "config": [] 00:42:34.411 } 00:42:34.411 ] 00:42:34.411 }' 00:42:34.411 [2024-11-06 10:34:37.868017] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:34.411 [2024-11-06 10:34:37.868072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:42:34.671 [2024-11-06 10:34:37.955835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.671 [2024-11-06 10:34:37.985900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:34.671 [2024-11-06 10:34:38.129205] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:35.238 10:34:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:35.238 10:34:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:35.238 10:34:38 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:35.238 10:34:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.238 10:34:38 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:35.497 10:34:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:35.497 10:34:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:35.497 10:34:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:35.497 10:34:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.497 10:34:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.497 10:34:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.497 10:34:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:35.756 10:34:39 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:35.756 10:34:39 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.756 10:34:39 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:35.756 10:34:39 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:35.756 10:34:39 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:35.756 10:34:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:36.016 10:34:39 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:36.016 10:34:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:36.016 10:34:39 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.FdpTXbN2D6 /tmp/tmp.eQbYoFtLn0 00:42:36.016 10:34:39 keyring_file -- keyring/file.sh@20 -- # killprocess 59395 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 59395 ']' 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@956 -- # kill -0 59395 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59395 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59395' 00:42:36.016 killing process with pid 59395 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@971 -- # kill 59395 00:42:36.016 Received shutdown signal, test time was about 1.000000 seconds 00:42:36.016 00:42:36.016 Latency(us) 00:42:36.016 [2024-11-06T09:34:39.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.016 [2024-11-06T09:34:39.517Z] =================================================================================================================== 00:42:36.016 [2024-11-06T09:34:39.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:36.016 10:34:39 keyring_file -- common/autotest_common.sh@976 -- # wait 59395 00:42:36.282 10:34:39 keyring_file -- keyring/file.sh@21 -- # killprocess 57492 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 57492 ']' 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@956 -- # kill -0 57492 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57492 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57492' 00:42:36.282 killing process with pid 57492 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@971 -- # kill 57492 00:42:36.282 10:34:39 keyring_file -- common/autotest_common.sh@976 -- # wait 57492 00:42:36.596 00:42:36.596 real 0m11.773s 00:42:36.596 user 0m28.355s 00:42:36.596 sys 0m2.640s 00:42:36.596 10:34:39 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:36.596 10:34:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:36.596 ************************************ 00:42:36.596 END TEST keyring_file 00:42:36.596 ************************************ 00:42:36.596 10:34:39 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:36.596 10:34:39 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:36.596 10:34:39 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:36.596 10:34:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:36.596 10:34:39 -- common/autotest_common.sh@10 -- # set +x 
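Note on the keyring_file run that just finished: the long JSON blob echoed by file.sh above is never written to disk. bdevperf is started with -z and -r /var/tmp/bperf.sock, and the configuration is handed over through process substitution, which is why the command line records -c /dev/fd/63. That ordering is what the test relies on: the keyring subsystem entries (key0 -> /tmp/tmp.FdpTXbN2D6, key1 -> /tmp/tmp.eQbYoFtLn0) are registered before bdev_nvme_attach_controller runs with "psk": "key0". A trimmed sketch of the same pattern follows; the actual run passes the full subsystem list echoed above, and whether this reduced config is sufficient on its own is an assumption.

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# -z: idle until a 'perform_tests' RPC arrives; -r: RPC socket used by the later checks
$bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo '{
      "subsystems": [
        { "subsystem": "keyring", "config": [
          { "method": "keyring_file_add_key",
            "params": { "name": "key0", "path": "/tmp/tmp.FdpTXbN2D6" } } ] },
        { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "127.0.0.1", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "psk": "key0" } },
          { "method": "bdev_wait_for_examine" } ] }
      ]
    }')

With -z the process then waits until perform_tests is sent over the same socket, which the log shows being done later through bdevperf.py.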
00:42:36.596 ************************************ 00:42:36.596 START TEST keyring_linux 00:42:36.596 ************************************ 00:42:36.596 10:34:39 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:36.596 Joined session keyring: 484910617 00:42:36.596 * Looking for test storage... 00:42:36.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:36.596 10:34:39 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:36.596 10:34:39 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:36.596 10:34:39 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:36.596 10:34:40 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.596 10:34:40 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:36.906 10:34:40 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.906 10:34:40 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:36.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.906 --rc genhtml_branch_coverage=1 00:42:36.906 --rc genhtml_function_coverage=1 00:42:36.906 --rc genhtml_legend=1 00:42:36.906 --rc geninfo_all_blocks=1 00:42:36.906 --rc geninfo_unexecuted_blocks=1 00:42:36.906 00:42:36.906 ' 00:42:36.906 10:34:40 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:36.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.906 --rc genhtml_branch_coverage=1 00:42:36.906 --rc genhtml_function_coverage=1 00:42:36.906 --rc genhtml_legend=1 00:42:36.906 --rc geninfo_all_blocks=1 00:42:36.906 --rc geninfo_unexecuted_blocks=1 00:42:36.906 00:42:36.906 ' 00:42:36.906 10:34:40 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:36.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.906 --rc genhtml_branch_coverage=1 00:42:36.906 --rc genhtml_function_coverage=1 00:42:36.906 --rc genhtml_legend=1 00:42:36.906 --rc geninfo_all_blocks=1 00:42:36.906 --rc geninfo_unexecuted_blocks=1 00:42:36.906 00:42:36.906 ' 00:42:36.906 10:34:40 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:36.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.906 --rc genhtml_branch_coverage=1 00:42:36.906 --rc genhtml_function_coverage=1 00:42:36.906 --rc genhtml_legend=1 00:42:36.906 --rc geninfo_all_blocks=1 00:42:36.906 --rc geninfo_unexecuted_blocks=1 00:42:36.906 00:42:36.906 ' 00:42:36.906 10:34:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:36.906 10:34:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.906 10:34:40 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.906 10:34:40 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.906 10:34:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.906 10:34:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.907 10:34:40 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.907 10:34:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:36.907 10:34:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
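The lt 1.15 2 / cmp_versions trace a few entries up is how common/autotest_common.sh picks which lcov --rc option spelling to export; the legacy lcov_branch_coverage/lcov_function_coverage names are used here because the installed lcov reports a 1.x version. A minimal sketch of that comparison, assuming purely numeric version components (the real helper in scripts/common.sh also validates each component before comparing):

# split on '.', '-' or ':' and compare numerically, left to right
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_*_coverage option names"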
00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:36.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:36.907 /tmp/:spdk-test:key0 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:36.907 
10:34:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:36.907 10:34:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:36.907 10:34:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:36.907 /tmp/:spdk-test:key1 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=59834 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 59834 00:42:36.907 10:34:40 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 59834 ']' 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:36.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:36.907 10:34:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:36.907 [2024-11-06 10:34:40.288307] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
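prep_key above writes each PSK to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in the NVMe/TCP interchange form (NVMeTLSkey-1:<hash id>:<base64>:). A minimal sketch of what the format_interchange_psk/format_key helper traced above appears to compute: base64 of the configured key bytes with a CRC-32 appended (little-endian byte order is an assumption here). The helper name below is hypothetical, and the output should be checked against the NVMeTLSkey-1:00:... value that keyctl handles later in this log.

# hypothetical name; the real logic is the python snippet traced from nvmf/common.sh above
make_interchange_psk() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")     # CRC-32 of the key bytes, appended
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PY
}

make_interchange_psk NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
# expected to match the key0 value seen later in this log:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: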
00:42:36.907 [2024-11-06 10:34:40.288386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:42:36.907 [2024-11-06 10:34:40.374731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.168 [2024-11-06 10:34:40.418302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:37.737 [2024-11-06 10:34:41.068703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:37.737 null0 00:42:37.737 [2024-11-06 10:34:41.100739] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:37.737 [2024-11-06 10:34:41.101164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:37.737 507416803 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:37.737 959969631 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=60115 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 60115 /var/tmp/bperf.sock 00:42:37.737 10:34:41 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 60115 ']' 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:37.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:37.737 10:34:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:37.737 [2024-11-06 10:34:41.177326] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
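The numbers echoed above (507416803 for :spdk-test:key0, 959969631 for :spdk-test:key1) are kernel key serials returned by keyctl when linux.sh loads the interchange PSKs into the session keyring joined by keyctl-session-wrapper; the keyring_linux module later resolves keys purely by those :spdk-test:keyN descriptions. A condensed sketch of the keyctl round trip the test performs, assuming the PSK file written by prep_key above and an already-joined session keyring (@s):

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)

keyctl search @s user :spdk-test:key0    # resolves the description back to the serial
keyctl print "$sn"                       # dumps the NVMeTLSkey-1:00:...: payload

keyctl unlink "$sn"                      # "1 links removed", as in the cleanup later on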
00:42:37.737 [2024-11-06 10:34:41.177384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:42:37.995 [2024-11-06 10:34:41.264802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.996 [2024-11-06 10:34:41.295061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:38.563 10:34:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:38.563 10:34:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:38.563 10:34:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:38.563 10:34:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:38.823 10:34:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:38.823 10:34:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:38.823 10:34:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:38.823 10:34:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:39.082 [2024-11-06 10:34:42.471509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:39.082 nvme0n1 00:42:39.082 10:34:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:39.082 10:34:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:39.082 10:34:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:39.082 10:34:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:39.082 10:34:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:39.082 10:34:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.341 10:34:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:39.341 10:34:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:39.341 10:34:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:39.341 10:34:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:39.341 10:34:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.341 10:34:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:39.341 10:34:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@25 -- # sn=507416803 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:39.601 10:34:42 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 507416803 == \5\0\7\4\1\6\8\0\3 ]] 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 507416803 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:39.601 10:34:42 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:39.601 Running I/O for 1 seconds... 00:42:40.539 16903.00 IOPS, 66.03 MiB/s 00:42:40.540 Latency(us) 00:42:40.540 [2024-11-06T09:34:44.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:40.540 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:40.540 nvme0n1 : 1.01 16903.14 66.03 0.00 0.00 7540.85 6772.05 14090.24 00:42:40.540 [2024-11-06T09:34:44.041Z] =================================================================================================================== 00:42:40.540 [2024-11-06T09:34:44.041Z] Total : 16903.14 66.03 0.00 0.00 7540.85 6772.05 14090.24 00:42:40.540 { 00:42:40.540 "results": [ 00:42:40.540 { 00:42:40.540 "job": "nvme0n1", 00:42:40.540 "core_mask": "0x2", 00:42:40.540 "workload": "randread", 00:42:40.540 "status": "finished", 00:42:40.540 "queue_depth": 128, 00:42:40.540 "io_size": 4096, 00:42:40.540 "runtime": 1.007564, 00:42:40.540 "iops": 16903.144614138655, 00:42:40.540 "mibps": 66.02790864897912, 00:42:40.540 "io_failed": 0, 00:42:40.540 "io_timeout": 0, 00:42:40.540 "avg_latency_us": 7540.848795725443, 00:42:40.540 "min_latency_us": 6772.053333333333, 00:42:40.540 "max_latency_us": 14090.24 00:42:40.540 } 00:42:40.540 ], 00:42:40.540 "core_count": 1 00:42:40.540 } 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:40.799 10:34:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:40.799 10:34:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.799 10:34:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:41.059 10:34:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:41.059 10:34:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:41.059 10:34:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:41.059 10:34:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
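The key and controller checks in this stretch are ordinary JSON-RPC calls against the bdevperf instance listening on /var/tmp/bperf.sock, filtered with jq, and the I/O itself is kicked off by bdevperf.py perform_tests over the same socket. A condensed sketch using the same rpc.py path as this workspace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# how many keys are registered (1 here: only :spdk-test:key0 is loaded)
$rpc -s $sock keyring_get_keys | jq length

# inspect one key by name (the test reads .refcnt / .sn from this object)
$rpc -s $sock keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'

# controllers created by bdev_nvme_attach_controller
$rpc -s $sock bdev_nvme_get_controllers | jq -r '.[].name'

For the result block above, the 66.03 MiB/s figure is simply the measured 16903.14 IOPS multiplied by the 4 KiB I/O size from -o 4k (16903.14 / 256 is approximately 66.03 MiB/s).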
00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.059 10:34:44 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:41.059 10:34:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:41.059 [2024-11-06 10:34:44.551233] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:41.059 [2024-11-06 10:34:44.551505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869270 (107): Transport endpoint is not connected 00:42:41.059 [2024-11-06 10:34:44.552501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869270 (9): Bad file descriptor 00:42:41.060 [2024-11-06 10:34:44.553504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:41.060 [2024-11-06 10:34:44.553512] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:41.060 [2024-11-06 10:34:44.553518] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:41.060 [2024-11-06 10:34:44.553528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:41.060 request: 00:42:41.060 { 00:42:41.060 "name": "nvme0", 00:42:41.060 "trtype": "tcp", 00:42:41.060 "traddr": "127.0.0.1", 00:42:41.060 "adrfam": "ipv4", 00:42:41.060 "trsvcid": "4420", 00:42:41.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:41.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:41.060 "prchk_reftag": false, 00:42:41.060 "prchk_guard": false, 00:42:41.060 "hdgst": false, 00:42:41.060 "ddgst": false, 00:42:41.060 "psk": ":spdk-test:key1", 00:42:41.060 "allow_unrecognized_csi": false, 00:42:41.060 "method": "bdev_nvme_attach_controller", 00:42:41.060 "req_id": 1 00:42:41.060 } 00:42:41.060 Got JSON-RPC error response 00:42:41.060 response: 00:42:41.060 { 00:42:41.060 "code": -5, 00:42:41.060 "message": "Input/output error" 00:42:41.060 } 00:42:41.319 10:34:44 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:41.319 10:34:44 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:41.319 10:34:44 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:41.319 10:34:44 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@33 -- # sn=507416803 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 507416803 00:42:41.319 1 links removed 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@33 -- # sn=959969631 00:42:41.319 10:34:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 959969631 00:42:41.319 1 links removed 00:42:41.320 10:34:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 60115 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 60115 ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 60115 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60115 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60115' 00:42:41.320 killing process with pid 60115 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@971 -- # kill 60115 00:42:41.320 Received shutdown signal, test time was about 1.000000 seconds 00:42:41.320 00:42:41.320 Latency(us) 
00:42:41.320 [2024-11-06T09:34:44.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:41.320 [2024-11-06T09:34:44.821Z] =================================================================================================================== 00:42:41.320 [2024-11-06T09:34:44.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@976 -- # wait 60115 00:42:41.320 10:34:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 59834 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 59834 ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 59834 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59834 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59834' 00:42:41.320 killing process with pid 59834 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@971 -- # kill 59834 00:42:41.320 10:34:44 keyring_linux -- common/autotest_common.sh@976 -- # wait 59834 00:42:41.579 00:42:41.579 real 0m5.153s 00:42:41.579 user 0m9.486s 00:42:41.579 sys 0m1.438s 00:42:41.579 10:34:45 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:41.579 10:34:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:41.579 ************************************ 00:42:41.579 END TEST keyring_linux 00:42:41.579 ************************************ 00:42:41.579 10:34:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:41.579 10:34:45 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:41.579 10:34:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:41.579 10:34:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:41.579 10:34:45 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:41.579 10:34:45 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:41.579 10:34:45 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:41.579 10:34:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:41.579 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:42:41.579 10:34:45 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:41.579 10:34:45 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:42:41.579 10:34:45 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:42:41.579 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:42:49.714 INFO: APP EXITING 00:42:49.714 INFO: killing all VMs 
00:42:49.714 INFO: killing vhost app 00:42:49.714 INFO: EXIT DONE 00:42:53.914 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:53.914 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:53.914 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:53.915 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:53.915 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:53.915 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:53.915 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:58.121 Cleaning 00:42:58.121 Removing: /var/run/dpdk/spdk0/config 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:58.121 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:58.121 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:58.121 Removing: /var/run/dpdk/spdk1/config 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:58.121 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:58.121 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:58.121 Removing: /var/run/dpdk/spdk2/config 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:58.121 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:58.121 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:58.121 Removing: /var/run/dpdk/spdk3/config 00:42:58.121 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:58.121 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:58.121 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:58.121 Removing: /var/run/dpdk/spdk4/config 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:58.121 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:58.121 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:58.121 Removing: /dev/shm/bdev_svc_trace.1 00:42:58.121 Removing: /dev/shm/nvmf_trace.0 00:42:58.121 Removing: /dev/shm/spdk_tgt_trace.pid3636794 00:42:58.121 Removing: /var/run/dpdk/spdk0 00:42:58.121 Removing: /var/run/dpdk/spdk1 00:42:58.121 Removing: /var/run/dpdk/spdk2 00:42:58.121 Removing: /var/run/dpdk/spdk3 00:42:58.121 Removing: /var/run/dpdk/spdk4 00:42:58.121 Removing: /var/run/dpdk/spdk_pid11153 00:42:58.121 Removing: /var/run/dpdk/spdk_pid18949 00:42:58.121 Removing: /var/run/dpdk/spdk_pid18954 00:42:58.121 Removing: /var/run/dpdk/spdk_pid2467 00:42:58.121 Removing: /var/run/dpdk/spdk_pid25645 00:42:58.121 Removing: /var/run/dpdk/spdk_pid28139 00:42:58.121 Removing: /var/run/dpdk/spdk_pid30356 00:42:58.121 Removing: /var/run/dpdk/spdk_pid31860 00:42:58.121 Removing: /var/run/dpdk/spdk_pid34284 00:42:58.121 Removing: /var/run/dpdk/spdk_pid36129 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3635086 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3636794 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3637436 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3638578 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3638813 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3640103 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3640228 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3640682 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3641730 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3642289 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3642680 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3643081 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3643494 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3643893 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3644245 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3644416 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3644685 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3646166 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3649877 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3650242 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3650612 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3650773 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3651319 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3651339 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3651725 
00:42:58.121 Removing: /var/run/dpdk/spdk_pid3652037 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3652401 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3652421 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3652777 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3652807 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3653465 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3653618 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3653994 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3659213 00:42:58.121 Removing: /var/run/dpdk/spdk_pid3665077 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3677669 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3678516 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3684421 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3684779 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3690519 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3697991 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3701935 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3715508 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3727564 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3729629 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3730653 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3753348 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3758839 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3819640 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3826720 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3834289 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3842848 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3842850 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3843856 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3844862 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3845866 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3846536 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3846612 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3846875 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3847135 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3847206 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3848210 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3849216 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3850224 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3850896 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3850902 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3851239 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3852678 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3854081 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3865215 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3901276 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3907914 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3909909 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3911949 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3912265 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3912287 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3912595 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3913084 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3915341 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3916192 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3916798 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3919422 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3920222 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3920936 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3926520 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3933725 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3933726 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3933727 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3938968 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3950384 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3955309 00:42:58.381 Removing: /var/run/dpdk/spdk_pid3963323 
00:42:58.381 Removing: /var/run/dpdk/spdk_pid3964862
00:42:58.381 Removing: /var/run/dpdk/spdk_pid3966672
00:42:58.381 Removing: /var/run/dpdk/spdk_pid3968189
00:42:58.641 Removing: /var/run/dpdk/spdk_pid3974489
00:42:58.641 Removing: /var/run/dpdk/spdk_pid3980399
00:42:58.642 Removing: /var/run/dpdk/spdk_pid3985797
00:42:58.642 Removing: /var/run/dpdk/spdk_pid3995949
00:42:58.642 Removing: /var/run/dpdk/spdk_pid3996052
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4001664
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4001994
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4002326
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4002667
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4002714
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4008794
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4009567
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4015526
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4019298
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4026391
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4033422
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4043871
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4053352
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4053355
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4079168
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4079957
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4080646
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4081331
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4082399
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4083103
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4083765
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4084446
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4090176
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4090510
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4098237
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4098431
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4105433
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4111141
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4123739
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4124416
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4130144
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4130499
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4135918
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4143299
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4146374
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4159548
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4171286
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4173393
00:42:58.642 Removing: /var/run/dpdk/spdk_pid4174621
00:42:58.642 Removing: /var/run/dpdk/spdk_pid47020
00:42:58.642 Removing: /var/run/dpdk/spdk_pid47679
00:42:58.642 Removing: /var/run/dpdk/spdk_pid48342
00:42:58.642 Removing: /var/run/dpdk/spdk_pid51415
00:42:58.642 Removing: /var/run/dpdk/spdk_pid51962
00:42:58.642 Removing: /var/run/dpdk/spdk_pid52439
00:42:58.642 Removing: /var/run/dpdk/spdk_pid57492
00:42:58.642 Removing: /var/run/dpdk/spdk_pid57579
00:42:58.642 Removing: /var/run/dpdk/spdk_pid59395
00:42:58.642 Removing: /var/run/dpdk/spdk_pid59834
00:42:58.642 Removing: /var/run/dpdk/spdk_pid60115
00:42:58.642 Removing: /var/run/dpdk/spdk_pid7725
00:42:58.642 Clean
00:42:58.903 10:35:02 -- common/autotest_common.sh@1451 -- # return 0
00:42:58.903 10:35:02 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:42:58.903 10:35:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:42:58.903 10:35:02 -- common/autotest_common.sh@10 -- # set +x
00:42:58.903 10:35:02 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:42:58.903 10:35:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:42:58.903 10:35:02 -- common/autotest_common.sh@10 -- # set +x
00:42:58.903 10:35:02 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:58.903 10:35:02 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:42:58.903 10:35:02 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:42:58.903 10:35:02 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:42:58.903 10:35:02 -- spdk/autotest.sh@394 -- # hostname
00:42:58.903 10:35:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:42:59.164 geninfo: WARNING: invalid characters removed from testname!
00:43:25.736 10:35:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:27.646 10:35:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:29.026 10:35:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:30.935 10:35:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:32.315 10:35:35 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:34.224 10:35:37 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:35.605 10:35:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:35.605 10:35:39 -- spdk/autorun.sh@1 -- $ timing_finish
00:43:35.605 10:35:39 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:43:35.605 10:35:39 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:35.605 10:35:39 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:35.605 10:35:39 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:35.605 + [[ -n 3549419 ]]
00:43:35.605 + sudo kill 3549419
00:43:35.874 [Pipeline] }
00:43:35.885 [Pipeline] // stage
00:43:35.889 [Pipeline] }
00:43:35.899 [Pipeline] // timeout
00:43:35.904 [Pipeline] }
00:43:35.914 [Pipeline] // catchError
00:43:35.919 [Pipeline] }
00:43:35.931 [Pipeline] // wrap
00:43:35.935 [Pipeline] }
00:43:35.946 [Pipeline] // catchError
00:43:35.953 [Pipeline] stage
00:43:35.955 [Pipeline] { (Epilogue)
00:43:35.966 [Pipeline] catchError
00:43:35.967 [Pipeline] {
00:43:35.978 [Pipeline] echo
00:43:35.979 Cleanup processes
00:43:35.985 [Pipeline] sh
00:43:36.270 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:36.270 73572 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:36.282 [Pipeline] sh
00:43:36.564 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:36.564 ++ grep -v 'sudo pgrep'
00:43:36.564 ++ awk '{print $1}'
00:43:36.564 + sudo kill -9
00:43:36.564 + true
00:43:36.575 [Pipeline] sh
00:43:36.862 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:49.097 [Pipeline] sh
00:43:49.381 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:49.381 Artifacts sizes are good
00:43:49.395 [Pipeline] archiveArtifacts
00:43:49.403 Archiving artifacts
00:43:49.558 [Pipeline] sh
00:43:49.914 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:49.929 [Pipeline] cleanWs
00:43:49.939 [WS-CLEANUP] Deleting project workspace...
00:43:49.939 [WS-CLEANUP] Deferred wipeout is used...
00:43:49.946 [WS-CLEANUP] done
00:43:49.948 [Pipeline] }
00:43:49.965 [Pipeline] // catchError
00:43:49.976 [Pipeline] sh
00:43:50.262 + logger -p user.info -t JENKINS-CI
00:43:50.273 [Pipeline] }
00:43:50.286 [Pipeline] // stage
00:43:50.292 [Pipeline] }
00:43:50.306 [Pipeline] // node
00:43:50.311 [Pipeline] End of Pipeline
00:43:50.344 Finished: SUCCESS